similar to: [Announce] CTDB 2.2 available for download

Displaying 20 results from an estimated 1200 matches similar to: "[Announce] CTDB 2.2 available for download"

2013 Oct 30
0
[Announce] CTDB 2.5 available for download
Changes in CTDB 2.5 =================== User-visible changes -------------------- * The default location of the ctdbd socket is now: /var/run/ctdb/ctdbd.socket If you currently set CTDB_SOCKET in configuration then unsetting it will probably do what you want. * The default location of CTDB TDB databases is now: /var/lib/ctdb If you only set CTDB_DBDIR (to the old default of
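For anyone upgrading, a minimal sketch of the matching configuration change; the file path and the old values shown are assumptions for illustration, not taken from the announcement (the old defaults are deliberately left hypothetical):

    # /etc/sysconfig/ctdb (location varies by distribution)
    # Remove or comment out explicit settings so the new defaults apply:
    #CTDB_SOCKET=/some/old/ctdb.socket   # hypothetical old value; default is now /var/run/ctdb/ctdbd.socket
    #CTDB_DBDIR=/some/old/dbdir          # hypothetical old value; default is now /var/lib/ctdb,
                                         # so move the TDB files there before restarting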
2017 Nov 06
0
ctdb vacuum timeouts and record locks
On Thu, 2 Nov 2017 12:17:56 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote: > Hm, I stand corrected on the problem-solved statement below. IP addresses > are simply not cooperating on the 2nd node. > > root at vault1:~# ctdb ip > Public IPs on node 0 > 192.168.120.90 0 > 192.168.120.91 0 > 192.168.120.92 0 > 192.168.120.93 0 > >
2013 Apr 09
0
Failed to start CTDB first time after install
Hi, I am setting up a two node Samba cluster with CTDB in AWS in two different subnets. All IP ports are open between these two subnets. I am initially forming the Samba cluster with one node, then will add the second node after startup of CTDB. I am not using public_addresses for CTDB because AWS does not support VIPs. I am using 64bit Amazon Linux with two NICs defined, eth0 as the
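For context, a layout of the kind described might look like the sketch below; every address and path here is an illustrative assumption, not taken from the post:

    # /etc/ctdb/nodes -- identical on both nodes, one private IP per line
    10.0.1.10
    10.0.2.10

    # sysconfig fragment: no CTDB_PUBLIC_ADDRESSES, since AWS offers no VIP support
    CTDB_RECOVERY_LOCK=/shared/ctdb/.reclock   # must sit on storage both nodes share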
2017 Nov 08
1
ctdb vacuum timeouts and record locks
Hi Martin, Thanks for your answer... >> I am using the 10.external. ip addr show shows the correct IP addresses >> on eth0 in the lxc container. Rebooted the physical machine; this node >> is buggered. Shut it down, used ip addr add to put the addresses on the >> other node, used ctdb addip, and the node took it; node1 is now >> functioning with all 4 IPs just
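A sketch of the manual takeover steps described above; the interface name and prefix length are assumptions:

    # attach the address at the OS level on the surviving node ...
    ip addr add 192.168.120.90/24 dev eth0
    # ... then register it with ctdb as a public address on that interface
    ctdb addip 192.168.120.90/24 eth0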
2012 May 24
0
Could not find node to take over public address
Hi, we run ctdb with samba on SLES11. It was running OK for some months, but after an update of the system and ctdb it fails to run. I tried to set up a new ctdb configuration on two other nodes and it still fails with the same error. After startup the status is: > ctdb status > Number of nodes:2 > pnn:0 10.94.43.7 DISABLED > pnn:1 10.94.43.8 DISABLED (THIS NODE) >
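DISABLED is an administrative state, and a disabled node is not eligible to host a public address, which is why no takeover node can be found. A minimal sketch of clearing it, assuming onnode is configured:

    onnode all ctdb enable   # clear the administrative DISABLED flag on every node
    ctdb status              # both nodes should now report OK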
2016 Nov 10
1
CTDB IP takeover/failover tunables - do you use them?
I'm currently hacking on CTDB's IP takeover/failover code. For Samba 4.6, I would like to rationalise the IP takeover-related tunable parameters. I would like to know if there are any users who set the values of these tunables to non-default values. The tunables in question are: DisableIPFailover Default: 0 When set to non-zero, ctdb will not perform failover or
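For orientation, these tunables can be read and set at runtime with the ctdb tool; note that setvar only affects the node it runs on and does not persist across a daemon restart:

    ctdb listvars                      # dump all tunables with their current values
    ctdb getvar DisableIPFailover      # read a single tunable
    ctdb setvar DisableIPFailover 1    # set it on this node only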
2014 Feb 26
0
CTDB Debug Help
Hello, I've got a two node CTDB/Samba cluster that I'm having trouble with, trying to add back a node after having to do an OS reload on it. The servers are running CTDB 2.5.1 and Samba 4.1.4 on AIX 7.1 TL2. The Samba CTDB databases and Samba service work fine from the node that was not reloaded. The rebuilt node is failing to re-add itself to the cluster. I'm looking for
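A hedged sketch of the usual first diagnostics for a node that refuses to rejoin; the log path is an assumption and varies by build:

    ctdb status                  # on the healthy node: what state does it see the peer in?
    ctdb setdebug INFO           # raise the daemon log level while reproducing
    tail -f /var/log/log.ctdb    # watch the rejoin attempt on the failing node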
2017 Nov 02
2
ctdb vacuum timeouts and record locks
Hm, I stand corrected on the problem-solved statement below. IP addresses are simply not cooperating on the 2nd node. root at vault1:~# ctdb ip Public IPs on node 0 192.168.120.90 0 192.168.120.91 0 192.168.120.92 0 192.168.120.93 0 root at vault2:/service/ctdb/log/main# ctdb ip Public IPs on node 1 192.168.120.90 0 192.168.120.91 0 192.168.120.92 0 192.168.120.93 0 root at
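In this output, each public IP is followed by the pnn of the node currently hosting it. To compare every node's view of the assignments in one shot, the standard tool invocation is:

    ctdb ip all    # ask all nodes which public addresses they know about and who hosts them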
2018 Feb 26
0
Re: [ctdb] Unable to take recovery lock - contention
On Monday, 26 February 2018 at 17:26:06 CET, zhu.shangzhong--- via samba wrote: Decoded base64-encoded body with some Chinese characters: ------------------Original Message------------------ From: 朱尚忠10137461 To: samba at lists.samba.org <samba at lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention When ctdb is starting, the "Unable to take recovery lock
2018 Feb 26
0
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" log message is output all the time. In which cases will the "unable to take lock" error be output? Thanks! The following are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" log message is output all the time. In which cases will the "unable to take lock" error be output? Thanks! The following are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
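Contention here means another process (usually ctdbd on another node) already holds the lock, or the cluster filesystem's locking is unreliable. Two standard checks, sketched; the test file path is an assumption:

    onnode all ctdb getreclock       # does every node report the same recovery lock path?
    ping_pong /clusterfs/locktest 3  # run concurrently from two nodes; ctdb ships this
                                     # fcntl-locking tester (use node count + 1 locks)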
2020 Mar 03
2
start/stop ctdb
Thanks, I added "Environment=PATH=$PATH:/usr/local/samba/bin:/bin". I also needed to add "/bin" because it couldn't find "sleep". That's the script now: ---- [Unit] Description=CTDB Documentation=man:ctdbd(1) man:ctdb(7) After=network-online.target time-sync.target ConditionFileNotEmpty=/usr/local/samba/etc/ctdb/nodes [Service]
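The unit file is cut off above at [Service]; a sketch of how that section might continue for a from-source install follows, with all paths and options being assumptions rather than the poster's actual file:

    [Service]
    Type=forking
    Environment=PATH=/usr/local/samba/bin:/usr/local/samba/sbin:/bin:/usr/bin
    ExecStart=/usr/local/samba/sbin/ctdbd
    PIDFile=/usr/local/samba/var/run/ctdb/ctdbd.pid

    [Install]
    WantedBy=multi-user.target

One caveat: systemd does not expand variables such as $PATH inside Environment= values, so the literal "$PATH:" in the quoted line buys nothing; spelling out the full list, as above, sidesteps that.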
2018 May 07
2
CTDB Path
Hello, I'm still trying to find out the right path for ctdb.conf (Ubuntu 18.04, Samba compiled from source!). When I try to start CTDB without any config file, my log in /usr/local/samba/var/log/log.ctdb shows me: 2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file "/usr/local/samba/etc/ctdb/nodes" 2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
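Both messages show ctdbd looking under its compiled-in prefix, so for this from-source build the configuration belongs under /usr/local/samba/etc/ctdb. A minimal sketch of satisfying the nodes-file check; the addresses are placeholders:

    mkdir -p /usr/local/samba/etc/ctdb
    printf '10.0.0.1\n10.0.0.2\n' > /usr/local/samba/etc/ctdb/nodes   # one node IP per line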
2020 Mar 03
1
start/stop ctdb
On 03/03/2020 09:13, Ralph Boehme via samba wrote: > On 3/3/20 10:05 AM, Micha Ballmann via samba wrote: >> Mar 3 09:50:50 ctdb1 systemd[1]: Starting CTDB... >> Mar 3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node >> Mar 3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as >> PID: 24667 >> Mar 3 09:50:50 ctdb1 ctdbd[24667]: Created PID file
2020 Mar 03
1
start/stop ctdb
Hai, Hmm, about... > "Environment=PATH=$PATH:/usr/local/samba/bin:/bin". > Also needed > to add "/bin" because it couldn't find "sleep". /bin should be found in $PATH, but since it looks like it is not, I suggest this: Environment="PATH=$PATH:/usr/local/samba/bin:/usr/local/samba/sbin:/bin:/sbin:/usr/bin:/usr/sbin" # to make sure you
2019 Apr 19
0
faI2ban detecting and banning but nothing happens
> I've added a fail regex to /etc/fail2ban/filter.d/exim.conf as suggested on > another page: The standard exim.conf already has a 535 filter. Was that not working for you? > > \[<HOST>\]: 535 Incorrect authentication data > > which appears to be successfully matching lines in /var/log/exim/mail.log such > as > > 2019-04-19 13:06:10 dovecot_plain
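When adding a custom rule like this, the usual practice is to keep it in a local filter file and dry-run it with fail2ban-regex before enabling the jail; a sketch, with the .local file name being an assumption:

    # /etc/fail2ban/filter.d/exim-auth.local
    [Definition]
    failregex = \[<HOST>\]: 535 Incorrect authentication data

    # dry-run against the real log:
    #   fail2ban-regex /var/log/exim/mail.log /etc/fail2ban/filter.d/exim-auth.local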
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------Original Message------------------ From: 朱尚忠10137461 To: samba@lists.samba.org <samba@lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention When ctdb is starting, the "Unable to take recovery lock - contention" log message is output all the time. In which cases will the "unable to take lock" error be output? Thanks! The
2019 Oct 04
0
CTDB and nfs-ganesha
Looks like this is the actual error: 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0 2019/10/04 09:51:29.175021 ctdbd[17244]: Recovery lock configuration inconsistent: recmaster has NULL, this node has /run/gluster/shared_storage/.CTDB-lockfile,
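The key line is the inconsistency: the recovery master has no lock configured while this node points at the gluster file, and recovery aborts until every node names the same path. A sketch for the modern ctdb.conf format; older releases set CTDB_RECOVERY_LOCK in the sysconfig file instead:

    # /etc/ctdb/ctdb.conf -- must be identical on every node
    [cluster]
        recovery lock = /run/gluster/shared_storage/.CTDB-lockfile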
2020 Mar 03
0
start/stop ctdb
Ah, yes, that's it, Ralph. Good one. Then in the systemd.service file you need to add: [Service] Environment=PATH=$PATH:/usr/local/samba/bin > -----Original Message----- > From: samba [mailto:samba-bounces at lists.samba.org] On Behalf Of > Ralph Boehme via samba > Sent: Tuesday, 3 March 2020 10:13 > To: Micha Ballmann; Anoop C S > CC: samba at lists.samba.org >
2019 Oct 05
0
CTDB and nfs-ganesha
I'll have to check out the script issue on Monday. You said the lock needs to be the same on all nodes. I can do that, but this is now in production, and restarting the ctdb service forces a failover of the IP, which actually causes a failure of a few of our Kubernetes SQL database pods - they freak out and don't recover if storage is ripped out from under them. Is there a way to do this
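One hedged possibility, not suggested in the thread itself: temporarily forbid takeover on the other nodes so that restarting one node does not move its address. NoIPTakeover is a per-node tunable and resets on daemon restart, so weigh this against the loss of failover while it is set:

    ctdb setvar NoIPTakeover 1    # on each node that must not grab the IP
    # ... restart ctdb on the node being maintained ...
    ctdb setvar NoIPTakeover 0    # restore normal takeover behaviour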