Displaying 20 results from an estimated 4000 matches similar to: "[CTDB] Loop print disconnect, unable to establish tcp link"
2024 Sep 24
0
[CTDB] Loop print disconnect, unable to establish tcp link
Thank you very much for your reply, Martin.
I am confident that the nodes files on all nodes are identical and contain no comments. If any node had a different nodes file, some nodes would log 'Refused connection from unknown node', but that message does not appear on any node.
None of the servers are virtual machines, but I cannot confirm whether the network is abnormal
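For context, a CTDB nodes file contains the private IP address of each node, one per line and nothing else, and it must be identical on every node (node numbers follow line order, so even ordering differences break the cluster). A minimal sketch, with illustrative addresses:
192.168.34.164
192.168.34.165
192.168.34.166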
2023 Nov 26
1
CTDB: some problems about disconnecting the private network of ctdb leader nodes
My ctdb version is 4.17.7
Hello, everyone.
My ctdb cluster configuration is correct and the cluster is healthy before the operation.
My cluster has three nodes: host-192-168-34-164, host-192-168-34-165, and host-192-168-34-166. The node host-192-168-34-164 is the leader before the operation.
I conducted network-flapping tests on node host-192-168-34-164: I brought down the interface of the private
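A flap test of this kind is commonly scripted by cycling the private interface while watching ctdb status from another node; a minimal sketch, assuming the private network sits on eth1 (interface name and hold time are illustrative):
ip link set dev eth1 down   # sever the private network on the leader
sleep 30                    # hold long enough for ctdb to mark the node DISCONNECTED
ip link set dev eth1 up     # restore the link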
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli,
[Sorry for slow response, life is busy...]
On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> we are using ctdb 4.15.5 on RHEL8 (Kernel
> 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8
> clients. Whenever an ip takeover happens most clients report
> something like this:
> [Mon Feb 13 12:21:22
2019 Jan 18
3
testparm: /var/run/ctdb/ctdb.socket missing
Apologies in advance, but I have been banging my head against this
and the only Google results I've found are from 2014, and don't work
(or apply).
OS: Ubuntu 18.04 bionic
smbd: 4.9.4-Debian (the apt.van-belle.nl version)
When I run `testparm` I get:
rlimit_max: increasing rlimit_max (8192) to minimum Windows limit
(16384)
WARNING: The "syslog" option is deprecated
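With clustering enabled in smb.conf, Samba tools such as testparm typically try to talk to ctdbd through its socket, so this message usually means ctdbd is not running or uses a different socket path. A hedged set of quick checks:
testparm -s 2>/dev/null | grep -i clustering   # is clustering = Yes in effect?
systemctl status ctdb                          # is ctdbd actually running?
ls -l /var/run/ctdb/ctdb.socket                # does the expected socket exist?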
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke schrieb am 15.02.2023 23:23:
> Hi Uli,
>
> [Sorry for slow response, life is busy...]
Thanks for answering anyway!
> On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
> OK, this part looks kind-of good. It would be interesting to know how
> long the entire failover process is taking.
What exactly would you define as the beginning and end of the
2020 Mar 03
1
start/stop ctdb
On 03/03/2020 09:13, Ralph Boehme via samba wrote:
> Am 3/3/20 um 10:05 AM schrieb Micha Ballmann via samba:
>> Mar  3 09:50:50 ctdb1 systemd[1]: Starting CTDB...
>> Mar  3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node
>> Mar  3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as
>> PID: 24667
>> Mar  3 09:50:50 ctdb1 ctdbd[24667]: Created PID file
2015 Aug 04
3
Does CTDB run under LXC containers?
We're transitioning from a VM-based environment to one that uses LXC-based
containers running under CentOS 7. CTDB runs fine under our CentOS 7
VMs. The same packages running under LXC, however, seem to have issues:
# systemctl start ctdb.service
Job for ctdb.service failed. See 'systemctl status ctdb.service' and
'journalctl -xn' for details.
# systemctl status ctdb.service
2018 May 04
2
CTDB Path
Hello,
At this time I want to install a CTDB cluster with SAMBA 4.7.7 from source!
I compiled samba as follows:
./configure --with-cluster-support --with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad
The whole SAMBA environment is located in /usr/local/samba/.
CTDB is located in /usr/local/samba/etc/ctdb.
Am I right that the correct path for ctdbd.conf (nodes file, public
address file
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
While ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously.
In which cases will the "unable to take lock" error be output?
Thanks!
The following are the ctdb logs:
2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node
2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602
2018/02/12 19:38:51.529060
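The recovery lock is a file on shared storage that nodes contend for with fcntl byte-range locks, so constant contention generally means another node (or a stale ctdbd) still holds the lock, or the cluster filesystem's lock support is broken. A sketch for the old-style configuration used by 4.6.x, with an illustrative path (it must be on a filesystem all nodes share):
# /etc/ctdb/ctdbd.conf
CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/reclock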
2015 Aug 04
1
Does CTDB run under LXC containers?
I'm using libvirt_lxc and that has an XML based configuration. Based on
what I've read, I think I need to add this to the ctdb container's config:
<features>
<capabilities policy='default'>
<sys_nice state='on'/>
</capabilities>
</features>
That didn't do the trick though. I need to figure out how to turn on all
caps to
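For completeness, libvirt's LXC driver also accepts a blanket capabilities policy instead of per-capability toggles; a sketch of the variant that keeps all capabilities:
<features>
  <capabilities policy='allow'/>
</features>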
2019 Oct 03
2
CTDB and nfs-ganesha
Hi Max,
On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:
> As soon as I made the configuration change and restarted CTDB, it crashes.
>
> Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
> Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
> Oct 2 11:05:21 hq-6pgluster01
2019 May 16
3
CTDB node stuck in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi Martin,
> Does a useful ps command (e.g. ps auxww) show smbd running? What do
> the smbd logs say?
no,
root at ctdb-<HOST>:~# ps auxwww | grep smbd
root 13192 0.0 0.0 12780 940 pts/0 S+ 13:05 0:00 grep smbd
There is no smbd; nmbd is running.
Ahh, but the log says:
[2019/05/16 11:26:38.702593, 0]
../source3/smbd/server.c:1519(smbd_claim_version)
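A direct way to confirm what the 50.samba event script is complaining about is to check the listener itself; a generic sketch, not taken from the original thread:
ss -tlnp | grep ':445'   # shows the process, if any, bound to the SMB port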
2020 Mar 03
3
start/stop ctdb
Hi,
I configured a running three-node CTDB SAMBA cluster (I hope). Virtual
IPs float between the nodes after one of them fails. I'm using
SAMBA 4.11.6 on Ubuntu 18.04. Samba was compiled from source. I
configured some systemd start/stop scripts for samba (smbd, nmbd) and
winbind, and also disabled them so that ctdb manages them instead.
I enabled ctdb to manage samba and winbind via this
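The usual pattern here is to stop systemd from starting the daemons and let ctdb's legacy event scripts do it; a sketch, assuming distro-style unit names (a from-source install may use different ones):
systemctl disable --now smbd nmbd winbind    # ctdb, not systemd, should start these
ctdb event script enable legacy 50.samba     # event-script syntax for ctdb >= 4.9
ctdb event script enable legacy 49.winbind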
2015 Aug 04
3
Does CTDB run under LXC containers?
On 2015-08-04 at 19:27 +0200, Ralph Böhme wrote:
> Hi Peter,
>
> On Tue, Aug 04, 2015 at 10:11:56AM -0700, Peter Steele wrote:
> > We're transitioning from a VM based environment to one that uses LXC based
> > containers running under CentOS 7. CTDB runs fine under our CentOS 7 VMs.
> > The same packages running under LXC however seem to have issues:
> >
>
2019 May 16
2
CTDB node stuck in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody,
I just updated my ctdb node from Samba version
4.9.4-SerNet-Debian-11.stretch to Samba version
4.9.8-SerNet-Debian-13.stretch.
After restarting the sernet-samba-ctdbd service the node doesn't come
back and remains in state "UNHEALTHY".
I can find that in the syslog:
May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not
listening on TCP port 445
May 16
2020 Mar 03
5
start/stop ctdb
Hi,
I updated the variables for my scenario, but CTDB won't start:
Mar  3 09:50:50 ctdb1 systemd[1]: Starting CTDB...
Mar  3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as
PID: 24667
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Created PID file
/usr/local/samba/var/run/ctdb/ctdbd.pid
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Removed
2014 Aug 16
1
CTDB: Failed to connect client socket to daemon.
Ubuntu 14.04, ctdb 2.5.3, samba 4.1.11. CTDB is working with IP takeover
between the 2 nodes. The machine is joined to the domain.
Any help with the following errors would be most gratefully received.
1. connect to socket error:
ctdb status
2014/08/16 15:32:03.248034 [23255]: client/ctdb_client.c:267 Failed to
connect client socket to daemon. Errno:Connection refused(111)
common/cmdline.c:156
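Errno 111 (connection refused) on the client socket means nothing is listening, i.e. ctdbd is not running or the ctdb tool is looking at a different socket path than the daemon uses. A sketch of the usual checks (socket path illustrative):
pidof ctdbd                                        # is the daemon alive at all?
ctdb --socket=/var/run/ctdb/ctdbd.socket status    # aim the tool at the daemon's socket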
2018 May 07
2
CTDB Path
Hello,
I'm still trying to find out the right path for ctdb.conf
(Ubuntu 18.04; Samba was compiled from source!).
When I try to start CTDB without any config file, the log in
/usr/local/samba/var/log/log.ctdb shows:
2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file
"/usr/local/samba/etc/ctdb/nodes"
2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
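The log answers the path question: a Samba/CTDB built with --prefix=/usr/local/samba reads its configuration from /usr/local/samba/etc/ctdb/. A sketch of satisfying the nodes-file complaint (addresses illustrative; public_addresses and ctdb.conf belong in the same directory):
mkdir -p /usr/local/samba/etc/ctdb
printf '192.168.1.1\n192.168.1.2\n' > /usr/local/samba/etc/ctdb/nodes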
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------ Original Message ------------------
From: 朱尚忠10137461
To: samba@lists.samba.org <samba@lists.samba.org>
Date: February 26, 2018, 17:10
Subject: [ctdb] Unable to take recovery lock - contention
While ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously.
In which cases will the "unable to take lock" error be output?
Thanks!
The
2023 Feb 13
1
ctdb tcp kill: remaining connections
Hello,
we are using ctdb 4.15.5 on RHEL8 (Kernel 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 clients. Whenever an IP takeover happens, most clients report something like this:
[Mon Feb 13 12:21:22 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:21:28 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:22:31 2023] nfs: server
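After a takeover, CTDB sends "tickle" ACKs so that clients reset their TCP connections to the moved address and reconnect to the new node; the connections CTDB has registered for a public IP can be listed with the ctdb tool (the address below is the placeholder from the log):
ctdb gettickles x.x.253.252   # list tracked TCP connections for this public IP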