similar to: [Announce] CTDB 2.5.3 available for download

Displaying 20 results from an estimated 1000 matches similar to: "[Announce] CTDB 2.5.3 available for download"

2013 Jul 11
0
[Announce] CTDB 2.3 available for download
Changes in CTDB 2.3 =================== User-visible changes -------------------- * 2 new configuration variables for the 60.nfs eventscript: - CTDB_MONITOR_NFS_THREAD_COUNT - CTDB_NFS_DUMP_STUCK_THREADS See ctdb.sysconfig for details. * Removed the DeadlockTimeout tunable. To enable debugging of locking issues, set CTDB_DEBUG_LOCKS=/etc/ctdb/debug_locks.sh * In overall statistics and database
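For anyone wiring these up, a minimal sketch of the corresponding /etc/sysconfig/ctdb entries — the variable names come from the announcement, but the values shown are illustrative assumptions (see ctdb.sysconfig for the real semantics):

    # Illustrative /etc/sysconfig/ctdb fragment (values are assumptions):
    CTDB_MONITOR_NFS_THREAD_COUNT=yes        # have 60.nfs monitor the nfsd thread count
    CTDB_NFS_DUMP_STUCK_THREADS=5            # dump stack traces for stuck nfsd threads
    # Replacement for the removed DeadlockTimeout tunable:
    CTDB_DEBUG_LOCKS=/etc/ctdb/debug_locks.sh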
2013 Oct 30
0
[Announce] CTDB 2.5 available for download
Changes in CTDB 2.5 =================== User-visible changes -------------------- * The default location of the ctdbd socket is now: /var/run/ctdb/ctdbd.socket If you currently set CTDB_SOCKET in configuration then unsetting it will probably do what you want. * The default location of CTDB TDB databases is now: /var/lib/ctdb If you only set CTDB_DBDIR (to the old default of
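A quick way to confirm a node picked up the new 2.5 defaults after upgrading — the paths are from the announcement; the commands are just one way to check:

    pgrep -a ctdbd                           # is the daemon up?
    ls -l /var/run/ctdb/ctdbd.socket         # new default socket location
    ls /var/lib/ctdb                         # TDB databases now default here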
2014 Feb 26
0
CTDB Debug Help
Hello, I've got a two-node CTDB/Samba cluster that I'm having trouble with: I'm trying to add a node back after having to do an OS reload on it. The servers are running CTDB 2.5.1 and Samba 4.1.4 on AIX 7.1 TL2. The Samba CTDB databases and Samba service work fine from the node that was not reloaded. The rebuilt node is failing to re-add itself to the cluster. I'm looking for
2014 Jul 03
0
ctdb split brain nodes doesn't see each other
Hi, I've set up a simple ctdb cluster, actually copying the config file from an existing system. That's what happens: Node 1, alone Number of nodes:2 pnn:0 10.0.0.1 OK (THIS NODE) pnn:1 10.0.0.2 DISCONNECTED|UNHEALTHY|INACTIVE Generation:1369816268 Size:1 hash:0 lmaster:0 Recovery mode:NORMAL (0) Recovery master:0 Node1, after start of ctdb on Node 2 Number of nodes:2 pnn:0
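For the DISCONNECTED symptom above, a common first check is that both nodes agree on the nodes file and can reach each other's interconnect port (4379 is CTDB's default; the address is taken from the status output above — adjust for your setup):

    md5sum /etc/ctdb/nodes                   # run on both nodes; checksums must match
    nc -z 10.0.0.2 4379 && echo reachable    # from node 1, test node 2's ctdb port
    ctdb status                              # compare the view from each node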
2014 Jul 21
0
CTDB no secrets.tdb created
Hi, 2-node ctdb 2.5.3 on Ubuntu 14.04; on both nodes AppArmor has been torn down and the firewall stopped dead. The IP takeover is working fine between the nodes: Jul 21 14:12:03 uc1 ctdbd: recoverd:Trigger takeoverrun Jul 21 14:12:03 uc1 ctdbd: recoverd:Takeover run starting Jul 21 14:12:04 uc1 ctdbd: Takeover of IP 192.168.1.81/24 on interface bond0 Jul 21 14:12:04 uc1 ctdbd: Takeover of IP 192.168.1.80/24 on
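The takeover log lines above imply a public_addresses file roughly like the following — addresses and interface are from the log; the path is the usual default:

    # /etc/ctdb/public_addresses on both nodes
    192.168.1.80/24 bond0
    192.168.1.81/24 bond0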
2017 Oct 27
0
ctdb vacuum timeouts and record locks
Hi Bob, On Thu, 26 Oct 2017 22:44:30 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote: > I set up a ctdb cluster a couple of months back. Things seemed pretty > solid for the first 2-3 weeks, but then I started getting reports of > people not being able to access files, or sometimes directories. It > has taken me a while to figure some stuff out,
2017 Oct 27
2
ctdb vacuum timeouts and record locks
Hi List, I set up a ctdb cluster a couple of months back. Things seemed pretty solid for the first 2-3 weeks, but then I started getting reports of people not being able to access files, or sometimes directories. It has taken me a while to figure some stuff out, but it seems the common denominator is vacuuming timeouts for locking.tdb in the ctdb log, which might go on
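When chasing vacuuming timeouts like these, the relevant tunables and the affected database can be inspected with the ctdb tool — the tunable names below exist in this CTDB generation, though the exact list is version-dependent:

    ctdb listvars | grep -i vacuum           # e.g. VacuumInterval, VacuumMaxRunTime
    ctdb getdbmap | grep locking             # confirm locking.tdb's db id and path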
2014 Mar 31
0
ctdb issue: existing header for db_id 0xf2a58948 has larger RSN 1 than new RSN 1 in ctdb_persistent_store
Hello, I found the following email on the internet, and I have the same problem; can you share your information about this issue? [Samba] ctdb issue: existing header for db_id 0xf2a58948 has larger RSN 1 than new RSN 1 in ctdb_persistent_store Nate Hardt nate at scalecomputing.com
2013 Aug 23
0
[Announce] CTDB 2.4 available for download
Changes in CTDB 2.4 =================== User-visible changes -------------------- * A missing network interface now causes monitoring to fail and the node to become unhealthy. * Changed ctdb command's default control timeout from 3s to 10s. * debug-hung-script.sh now includes the output of "ctdb scriptstatus" to provide more information. Important bug fixes
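The new 10s default can still be overridden per invocation with the ctdb tool's -t option — the option is long-standing; the value here is illustrative:

    ctdb -t 30 status                        # allow 30s on a busy cluster
    ctdb scriptstatus                        # the output debug-hung-script.sh now embeds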
2013 Sep 04
0
Recommended CTDB release for new installation
Hi folks, We're building a new CTDB cluster - any recommendation on which version of CTDB is the most "production" at the moment? We're currently on 1.2.66 on our in-service clusters. Cheers, Orlando > -------- Original Message -------- > Subject: [Samba] [Announce] CTDB 2.4 available for download > Date: Fri, 23 Aug 2013 14:11:29 +1000 > From:
2018 Feb 26
0
Re: [ctdb] Unable to take recovery lock - contention
On Monday, 26 February 2018, 17:26:06 CET, zhu.shangzhong--- via samba wrote: Decoded base64-encoded body with some Chinese characters: ------------------Original Mail------------------ From: 朱尚忠10137461 To: samba at lists.samba.org <samba at lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention When ctdb is starting, the "Unable to take recovery lock
2018 Feb 26
0
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" log message is output continuously. In which cases will the "unable to take lock" error be output? Thanks! The following are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
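Contention on the recovery lock means another node (or a stale holder) already has it. Assuming the stock recovery-lock setup on a shared filesystem (the config file path varies by packaging), the usual checks are:

    ctdb getreclock                              # ask the daemon where the lock file is
    grep -i RECOVERY_LOCK /etc/ctdb/ctdbd.conf   # CTDB_RECOVERY_LOCK=<path on shared fs>
    # The path must be on a filesystem mounted by every node with working
    # fcntl byte-range locking; ping_pong(1) from the ctdb tools can verify that.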
2019 Oct 05
2
CTDB and nfs-ganesha
Hi Max, On Fri, 4 Oct 2019 14:01:22 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote: > Looks like this is the actual error: > > 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started > 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0 > 2019/10/04 09:51:29.175021
2012 Apr 17
0
CTDB panics when vacuuming serverid.tdb
CTDB Samba Team, I have a two-node cluster successfully running a GFS2 filesystem. I compiled ctdb ver 1.12 with Samba 3.6.3 for 64-bit systems. Running on RHEL 5.7. I was able to add the cluster to the domain but after I restarted CTDB, it panics right after doing a vacuum of the serverid.tdb database. The lock file is on the GFS FS so both nodes can access it. Any ideas as to what
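A standard sanity test for lock behaviour on a cluster filesystem like GFS2 is ping_pong from the ctdb tools: run it concurrently on both nodes against the same shared file and watch the lock rate. The file name below is arbitrary, and the count is conventionally nodes + 1:

    ping_pong /gfs2/shared/test.dat 3        # run at the same time on both nodes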
2007 Jul 23
0
ControlPersist + IdleTimeout
Hi there, So I created a patch that makes ssh behave more like sudo. You connect to a host typing your password, you quit, you connect again and you are let in immediately. If you wait too long you have to type your password again. It works if you have a ControlPath, ControlMaster is auto, ControlPersist is yes and ControlTimeout is, for example, 5m. This will make a master when you
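For context, the stock OpenSSH options the patch builds on look like this in ~/.ssh/config; the timeout behaviour described is the patch's addition, not a stock option:

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 5m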
2014 Aug 16
1
CTDB: Failed to connect client socket to daemon.
Ubuntu 14.04, ctdb 2.5.3, samba 4.1.11. CTDB is working with IP takeover between the 2 nodes. The machine is joined to the domain. Any help with the following errors would be most gratefully received. 1. connect to socket error: ctdb status 2014/08/16 15:32:03.248034 [23255]: client/ctdb_client.c:267 Failed to connect client socket to daemon. Errno:Connection refused(111) common/cmdline.c:156
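"Failed to connect client socket to daemon" usually means ctdbd is not running, or the tool and daemon disagree on the socket path. A sketch of the usual checks, assuming the 2.5-era default socket location:

    pgrep ctdbd || echo "ctdbd not running"
    ls -l /var/run/ctdb/ctdbd.socket         # does the socket exist?
    ctdb --socket=/var/run/ctdb/ctdbd.socket status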
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" log message is output continuously. In which cases will the "unable to take lock" error be output? Thanks! The following are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
2017 Nov 02
0
ctdb vacuum timeouts and record locks
Hi, This occurred again this morning. When the user reported the problem, I found in the ctdb logs that vacuuming had been going on since last night. The need to fix it was urgent (when isn't it?), so I didn't have time to poke around for clues and immediately restarted the lxc container. But this time it wouldn't restart, which I had time to trace to a hung smbd process, and
2015 Apr 13
0
[Announce] CTDB release 2.5.5 is ready for download
This is the latest stable release of CTDB. CTDB 2.5.5 can be used with Samba releases prior to Samba 4.2.x (i.e. Samba releases 3.6.x, 4.0.x and 4.1.x). Changes in CTDB 2.5.5 ===================== User-visible changes -------------------- * Dump stack traces for hung RPC processes (mountd, rquotad, statd) * Add vacuuming latency to database statistics * Add -X option to ctdb tool that uses
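The announced -X flag emits machine-readable output with '|' separators, which is convenient in scripts — the awk invocation is just one illustration of consuming it:

    ctdb -X status                                    # |Node|IP|...| header plus rows
    ctdb -X status | awk -F'|' 'NR>1 {print $2, $3}'  # node number and IP per row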
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------Original Mail------------------ From: 朱尚忠10137461 To: samba@lists.samba.org <samba@lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention When ctdb is starting, the "Unable to take recovery lock - contention" log message is output continuously. In which cases will the "unable to take lock" error be output? Thanks! The