
Displaying 20 results from an estimated 6000 matches similar to: "samba ctdb doesn't set default gateway properly on second node;"

2019 Oct 03
2
CTDB and nfs-ganesha
Hi Max, On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote: > As soon as I made the configuration change and restarted CTDB, it crashes. > > Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB. > Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE > Oct 2 11:05:21 hq-6pgluster01
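For context, the usual way to hook NFS-Ganesha into CTDB 4.9+ is to point the NFS event script at a ganesha callout instead of the default kernel-NFS one. A minimal sketch, assuming the example nfs-ganesha-callout shipped with ctdb has been copied into place; the path and options below are assumptions about this setup, not details from the thread:

    # /etc/ctdb/script.options
    CTDB_NFS_CALLOUT="/etc/ctdb/nfs-ganesha-callout"   # assumed location of the example callout
    CTDB_NFS_SKIP_SHARE_CHECK="yes"                    # ganesha exports are not in /etc/exports

    # with ctdbd running, enable the legacy NFS event script:
    ctdb event script enable legacy 60.nfs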
2019 Oct 05
2
CTDB and nfs-ganesha
Hi Max, On Fri, 4 Oct 2019 14:01:22 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote: > Looks like this is the actual error: > > 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started > 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0 > 2019/10/04 09:51:29.175021
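The message itself points at the problem: node 0 (the recovery master) has no recovery lock configured while node 1 does. A minimal sketch of the usual fix, using the path already quoted in the log, set identically on every node:

    # /etc/ctdb/ctdb.conf  -- must be the same on all nodes
    [cluster]
        recovery lock = /run/gluster/shared_storage/.CTDB-lockfile

    # after restarting ctdbd, verify on each node:
    ctdb getreclock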
2014 Dec 05
1
CTDB port 445 ERROR
Hello, I just set up a CTDB cluster with two nodes. In /var/log/log.ctdb I saw the following error many times: "ERROR: samba tcp port 445, is not responding". The command "ctdb scriptstatus" shows: root at fs1:~# ctdb scriptstatus 11 scripts were executed last monitor cycle 00.ctdb
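The 50.samba monitor event does little more than check that smbd answers on its configured ports, so the quickest way to narrow this down is to repeat that check by hand; a sketch, to be adjusted for the actual setup:

    ss -tlnp | grep ':445 '        # is any smbd actually listening on 445?
    smbclient -L localhost -N      # does it answer a connection?
    ctdb scriptstatus              # which event script reports the failure?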
2017 Nov 08
1
ctdb vacuum timeouts and record locks
Hi Martin, Thanks for your answer... >> I am using the 10.external. ip addr show shows the correct IP addresses >> on eth0 in the lxc container. rebooted the physical machine, this node >> is buggered. shut it down, used ip addr add to put the addresses on the >> other node, used ctdb addip and the node took it and node1 is now >> functioning with all 4 IPs just
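As a reminder for anyone following along, with 10.external the addresses are moved outside of CTDB and CTDB is only told about the result afterwards, roughly like this (the address and interface are placeholders, not the ones from the thread):

    # on the node giving up the address
    ip addr del 192.0.2.10/24 dev eth0
    ctdb delip 192.0.2.10
    # on the node taking it over
    ip addr add 192.0.2.10/24 dev eth0
    ctdb addip 192.0.2.10/24 eth0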
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller <ulrich.sibiller at atos.net> wrote: > Martin Schwenke schrieb am 15.02.2023 23:23: > > OK, this part looks kind-of good. It would be interesting to know how > > long the entire failover process is taking. > > What exactly would you define as the begin and end of the failover? From "Takeover run
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke schrieb am 15.02.2023 23:23: > Hi Uli, > > [Sorry for slow response, life is busy...] Thanks for answering anyway! > On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba > OK, this part looks kind-of good. It would be interesting to know how > long the entire failover process is taking. What exactly would you define as the begin and end of the
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli, [Sorry for slow response, life is busy...] On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba <samba at lists.samba.org> wrote: > we are using ctdb 4.15.5 on RHEL8 (Kernel > 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 > clients. Whenever an ip takeover happens most clients report > something like this: > [Mon Feb 13 12:21:22
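One way to see what CTDB knows about these client connections is to list the "tickle" entries registered for a public address; these are the TCP connections CTDB will try to reset on takeover (the address below is a placeholder):

    ctdb gettickles 192.0.2.20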
2019 May 16
2
CTDB node stucks in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody, I just updated my ctdb node from Samba version 4.9.4-SerNet-Debian-11.stretch to Samba version 4.9.8-SerNet-Debian-13.stretch. After restarting the sernet-samba-ctdbd service the node doesn't come back and remains in state "UNHEALTHY". I can find that in the syslog: May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445 May 16
2016 Apr 07
1
Updating from 4.1 + CTDB to 4.2/CTDB?
Dear list, We are about to upgrade to Samba 4.2.9 from sernet-samba 4.1.6 + CTDB 1.0.114.7 running on top of GPFS. My understanding is that as of 4.2, CTDB is now part of Samba. So does this mean we need to uninstall all of our sernet-samba and ctdb RPMs (we are on CentOS 6), then install the sernet-samba 4.2.9 rpms and reconfigure everything for CTDB? For those who have done this, how
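One plausible sequence is a node-at-a-time package swap; a sketch, with package names that are assumptions to be checked against the SerNet 4.2 repository rather than commands to copy verbatim:

    ctdb disable                                  # move public IPs off this node first
    service ctdb stop
    yum remove ctdb                               # old standalone CTDB 1.0.114
    yum install sernet-samba sernet-samba-ctdb    # assumed package names for 4.2.x
    # 4.2-era CTDB still reads /etc/sysconfig/ctdb, nodes and public_addresses,
    # so the existing configuration can largely be carried over
    service ctdb start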
2019 Oct 04
0
CTDB and nfs-ganesha
Looks like this is the actual error: 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0 2019/10/04 09:51:29.175021 ctdbd[17244]: Recovery lock configuration inconsistent: recmaster has NULL, this node has /run/gluster/shared_storage/.CTDB-lockfile,
2019 Oct 05
0
CTDB and nfs-ganesha
I'll have to check out the script issue on Monday. You said the lock needs to be the same on all nodes. I can do that, but this is now in production, and restarting the ctdb service forces a failover of the IP, which actually causes a failure of a few of our Kubernetes SQL database pods - they freak out and don't recover if storage is ripped out from under them. Is there a way to do this
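If the restart itself cannot be avoided, the IP movement can at least be made deliberate instead of abrupt; a commonly used pattern, one node at a time (whether this is gentle enough for the Kubernetes pods is another question):

    ctdb disable            # triggers a takeover run; this node's public IPs move off in a controlled way
    systemctl restart ctdb
    ctdb enable             # node becomes eligible to host public IPs again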
2019 Jan 18
3
testparm: /var/run/ctdb/ctdb.socket missing
Apologies in advance, but I have been banging my head against this and the only Google results I've found are from 2014, and don't work (or apply).

OS: Ubuntu 18.04 bionic
smbd: 4.9.4-Debian (the apt.van-belle.nl version)

When I run `testparm` I get:

rlimit_max: increasing rlimit_max (8192) to minimum Windows limit (16384)
WARNING: The "syslog" option is deprecated
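testparm normally only goes looking for the CTDB socket when clustering is enabled in smb.conf, so a sketch of what to check first (the socket path is whatever your ctdbd actually uses):

    testparm -sv 2>/dev/null | grep -iE 'clustering|ctdbd socket'
    ctdb status                                   # is ctdbd running at all?
    # if ctdbd listens on a non-default socket, point Samba at it in smb.conf:
    #   ctdbd socket = /var/run/ctdb/ctdb.socket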
2020 Mar 03
1
start/stop ctdb
On 03/03/2020 09:13, Ralph Boehme via samba wrote: > Am 3/3/20 um 10:05 AM schrieb Micha Ballmann via samba: >> Mar 3 09:50:50 ctdb1 systemd[1]: Starting CTDB... >> Mar 3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node >> Mar 3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as >> PID: 24667 >> Mar 3 09:50:50 ctdb1 ctdbd[24667]: Created PID file
2014 Jul 08
1
smbd does not start under ctdb
Hi, 2 node drbd cluster with ocfs2. Both nodes: openSUSE 4.1.9 with drbd 8.4 and ctdbd 2.3. All seems OK with ctdb:

n1: ctdb status
Number of nodes:2
pnn:0 192.168.0.10       OK (THIS NODE)
pnn:1 192.168.0.11       OK
Generation:1187222392
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0

n2: ctdb status
Number of nodes:2
pnn:0 192.168.0.10       OK
pnn:1 192.168.0.11
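With ctdbd 2.x, smbd is normally started and monitored by CTDB's 50.samba event script rather than by the distro's own init scripts, so a minimal sketch of the sysconfig settings this usually needs (file location varies by distro):

    # /etc/sysconfig/ctdb   (Debian/Ubuntu: /etc/default/ctdb)
    CTDB_MANAGES_SAMBA="yes"
    CTDB_MANAGES_WINBIND="yes"
    # and make sure the distro does not also start smbd on its own,
    # e.g. chkconfig smb off / systemctl disable smb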
2012 Oct 31
1
[Announce] CTDB release 2.0 is ready for download
This is a long overdue CTDB release. There have been numerous code enhancements and bug fixes since the last release of CTDB.

Highlights
=======
* Support for readonly records (http://ctdb.samba.org/doc/readonlyrecords.txt)
* Locking API to detect deadlocks between ctdb and samba
* Fetch-lock optimization to rate-limit concurrent requests for same record
* Support for policy routing
* Modified IP
2019 Jul 09
3
CTDB Samba 4.10 example?
Hello, from Samba 4.9/4.10 onwards there was a big change regarding CTDB. Does anyone have an example configuration showing how to configure a CTDB cluster from version 4.10? Is there someone who runs CTDB with Samba 4.10? Best regards Micha
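Since 4.9 the daemon reads the new INI-style /etc/ctdb/ctdb.conf, most of the old CTDB_* script variables moved to /etc/ctdb/script.options, and the legacy event scripts are disabled by default. A minimal sketch for clustered Samba; the recovery lock path is a placeholder on your shared filesystem:

    # /etc/ctdb/ctdb.conf
    [cluster]
        recovery lock = /clusterfs/.ctdb/reclock

    # /etc/ctdb/nodes              one private address per line, identical on all nodes
    # /etc/ctdb/public_addresses   e.g. 192.0.2.20/24 eth0

    # with ctdbd running, enable the Samba event script:
    ctdb event script enable legacy 50.samba

    # smb.conf still needs:  clustering = yes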
2019 Jul 09
1
CTDB Samba 4.10 example?
Hi there, I have used Samba 4.9 since its release day and also had some trouble at first due to the lack of information in the wiki. Anyway... after some trial & error I got it working, and since I have been running the cluster (non-production environment) I have had no errors, except that CTDB's monitoring of the Samba process didn't work as I would expect (the RPC server was dead but CTDB didn't complain and "ctdb
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but running it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
2015 Aug 04
3
Does CTDB run under LXC containers?
We're transitioning from a VM based environment to one that uses LXC based containers running under CentOS 7. CTDB runs fine under our CentOS 7 VMs. The same packages running under LXC however seem to have issues: # systemctl start ctdb.service Job for ctdb.service failed. See 'systemctl status ctdb.service' and 'journalctl -xn' for details. # systemctl status ctdb.service
2015 Aug 04
1
Does CTDB run under LXC containers?
I'm using libvirt_lxc and that has an XML based configuration. Based on what I've read, I think I need to add this to the ctdb container's config:

<features>
  <capabilities policy='default'>
    <sys_nice state='on'/>
  </capabilities>
</features>

That didn't do the trick though. I need to figure out how to turn on all caps to
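If individual capabilities are not enough, libvirt's LXC driver can also be told to keep all capabilities in the container; a sketch of that fragment (whether CTDB is then happy inside the container is a separate question):

<features>
  <capabilities policy='allow'/>
</features>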