similar to: [Announce] CTDB release 2.5.5 is ready for download

Displaying 20 results from an estimated 7000 matches similar to: "[Announce] CTDB release 2.5.5 is ready for download"

2014 Mar 31
0
ctdb issue: existing header for db_id 0xf2a58948 has larger RSN 1 than new RSN 1 in ctdb_persistent_store
Hello, I found the following email on the internet, and I have the same problem. Can you share your information about this issue? [Samba] ctdb issue: existing header for db_id 0xf2a58948 has larger RSN 1 than new RSN 1 in ctdb_persistent_store Nate Hardt nate at scalecomputing.com
2012 Oct 31
1
[Announce] CTDB release 2.0 is ready for download
This is a long-overdue CTDB release. There have been numerous code enhancements and bug fixes since the last release of CTDB.

Highlights
==========
* Support for readonly records (http://ctdb.samba.org/doc/readonlyrecords.txt)
* Locking API to detect deadlocks between ctdb and samba
* Fetch-lock optimization to rate-limit concurrent requests for the same record
* Support for policy routing
* Modified IP
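
(For context, readonly record support is enabled per attached database through the ctdb tool. A minimal sketch, assuming locking.tdb as the target database; the readonlyrecords.txt link above is the authoritative description:)

    # Mark a volatile database as supporting readonly record delegations
    ctdb setdbreadonly locking.tdb

    # List attached databases and their flags to confirm
    ctdb getdbmap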
2014 Sep 26
0
[Announce] CTDB release 2.5.4 is ready for download
This is the latest stable release of CTDB. CTDB 2.5.4 can be used with Samba releases 3.6.x, 4.0.x and 4.1.x.

Changes in CTDB 2.5.4
=====================

User-visible changes
--------------------
* New command "ctdb detach" to detach a database.
* Support for TDB robust mutexes. To enable, set TDBMutexEnabled=1. The setting is per node.
* New manual page ctdb-statistics.7.
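
(A rough sketch of exercising those two items from the ctdb tool; mydb.tdb is a placeholder name, and the tunable can usually also be set persistently via CTDB_SET_TDBMutexEnabled=1 in the node's CTDB sysconfig file:)

    # Per-node tunable from the notes above; 1 enables robust mutexes
    ctdb setvar TDBMutexEnabled 1

    # Detach a database from the cluster (new in 2.5.4)
    ctdb detach mydb.tdb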
2010 Oct 19
0
CTDB starting statd without -n gfs -H /etc/ctdb/statd-callout
Hello, First and foremost, thanks *very* much for ctdb. It's a joy to use after banging around with other HA solutions. We're planning to use it to export Samba and NFS shares throughout campus. I'm having one problem with the NFS part, though. When ctdbd first starts statd (we're using CTDB_MANAGES_NFS=yes), it does so without appending the options in the STATD_HOSTNAME variable
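
(For readers unfamiliar with this setup: on Red Hat-style systems the CTDB NFS integration reads statd options from /etc/sysconfig/nfs. A minimal sketch matching the options in the subject line; the hostname "gfs" is the poster's, and the exact variable handling is version-dependent:)

    # /etc/sysconfig/nfs
    NFS_HOSTNAME="gfs"
    STATD_HOSTNAME="$NFS_HOSTNAME -H /etc/ctdb/statd-callout"

With that in place, statd should be started as: rpc.statd -n gfs -H /etc/ctdb/statd-callout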
2014 Feb 26
0
CTDB Debug Help
Hello, I've got a two-node CTDB/Samba cluster that I'm having trouble with: I'm trying to add a node back after having to do an OS reload on it. The servers are running CTDB 2.5.1 and Samba 4.1.4 on AIX 7.1 TL2. The Samba CTDB databases and Samba service work fine from the node that was not reloaded. The rebuilt node is failing to re-add itself to the cluster. I'm looking for
2017 Apr 20
0
CTDB problems
On Wed, 19 Apr 2017 12:55:45 +0100, Alex Crow via samba <samba at lists.samba.org> wrote:
> This morning our CTDB-managed cluster took a nosedive. We had member
> machines with hung smbd tasks, which caused them to reboot, and the
> cluster did not come back up consistently. We eventually got it more or
> less stable with two nodes out of the 3, but we're still seeing
2024 Oct 16
1
ctdb tcp settings for statd failover
Hi Ulrich, On Tue, 15 Oct 2024 15:22:51 +0000, Ulrich Sibiller via samba <samba at lists.samba.org> wrote:
> In current (6140c3177a0330f42411618c3fca28930ea02a21) samba's
> ctdb/tools/statd_callout_helper I find this comment:
>
> notify)
>     ...
>     # we need these settings to make sure that no tcp connections survive
>     # across a very fast failover/failback
2014 Dec 12
0
Intermittent Event Script Timeouts on CTDB Cluster Nodes
Hi All, I've got a CTDB cluster, managing NFSv3 and Samba, sitting in front of a GPFS storage cluster. The NFSv3 piece is carrying some pretty heavy traffic at peak load. About once every three to four days, CTDB has been exhibiting behaviors that result in IP failover between two nodes for reasons that are currently unknown. The exact chain of events has been a little different each time
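
(One knob that is often checked when event scripts time out intermittently, per the subject line, is the event script timeout tunable. A sketch, assuming the classic tunable name, which can vary across CTDB versions:)

    # Show the current per-node value
    ctdb getvar EventScriptTimeout

    # Raise it, e.g. to 60 seconds, on each node
    ctdb setvar EventScriptTimeout 60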
2014 Jul 03
0
ctdb split brain nodes doesn't see each other
Hi, I've set up a simple ctdb cluster; I actually copied the config file from an existing system. That's what happens:

Node 1, alone:

    Number of nodes:2
    pnn:0 10.0.0.1   OK (THIS NODE)
    pnn:1 10.0.0.2   DISCONNECTED|UNHEALTHY|INACTIVE
    Generation:1369816268
    Size:1
    hash:0 lmaster:0
    Recovery mode:NORMAL (0)
    Recovery master:0

Node 1, after start of ctdb on Node 2:

    Number of nodes:2
    pnn:0
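
(Since the config was copied from another system, it is worth noting that CTDB requires the nodes file to be identical, in content and ordering, on every node. A minimal sketch matching the addresses shown above:)

    # /etc/ctdb/nodes -- one private address per line,
    # identical on both nodes
    10.0.0.1
    10.0.0.2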
2017 Oct 27
0
ctdb vacuum timeouts and record locks
Hi Bob, On Thu, 26 Oct 2017 22:44:30 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote:
> I set up a ctdb cluster a couple of months back. Things seemed pretty
> solid for the first 2-3 weeks, but then I started getting reports of
> people not being able to access files, or sometimes directories. It
> has taken me a while to figure some stuff out,
2017 Oct 27
2
ctdb vacuum timeouts and record locks
Hi List, I set up a ctdb cluster a couple of months back. Things seemed pretty solid for the first 2-3 weeks, but then I started getting reports of people not being able to access files, or sometimes directories. It has taken me a while to figure some stuff out, but it seems the common denominator to this happening is vacuuming timeouts for locking.tdb in the ctdb log, which might go on
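
(Not from the thread, just a pointer: vacuuming behaviour is driven by a handful of per-node CTDB tunables, whose names vary between CTDB versions. A sketch of inspecting them:)

    # Show vacuum-related tunables on this node
    ctdb listvars | grep -i vacuum

    # List databases and their paths, to identify locking.tdb
    ctdb getdbmap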
2013 Nov 27
0
[Announce] CTDB 2.5.1 available for download
Hi, Since the CTDB tree has been merged into the Samba tree, any new CTDB development will be done in the Samba tree. Until a combined Samba+CTDB release is made, CTDB fixes will be released as minor releases starting with 2.5.1. Amitay.

Changes in CTDB 2.5.1
=====================

Important bug fixes
-------------------
* The locking code now correctly implements a per-database active locks limit. Whole
2012 Oct 24
2
Why portmap is needed for NFSv4 in CentOS6
Hi all, I have set up a CentOS 6.3 x86_64 host to act as an NFS server. According to the RHEL6 docs, portmap is not needed when you use NFSv4, but on my host I need to start the rpcbind service to make NFSv4 work. My /etc/sysconfig/nfs:

    #
    # Define which protocol versions mountd
    # will advertise. The values are "no" or "yes"
    # with yes being the default
    MOUNTD_NFS_V2="no"
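
(For comparison, an NFSv4-only /etc/sysconfig/nfs on RHEL6-era systems looks roughly like the sketch below. In practice the stock init scripts still start RPC daemons such as rpc.mountd, which register with rpcbind at startup, which is likely why rpcbind is still required even for a v4-only server:)

    # /etc/sysconfig/nfs -- advertise NFSv4 only
    MOUNTD_NFS_V2="no"
    MOUNTD_NFS_V3="no"
    # Tell rpc.nfsd not to serve v2/v3
    RPCNFSDARGS="-N 2 -N 3"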
2012 Apr 17
0
CTDB panics when vacuuming serverid.tdb
CTDB Samba Team, I have a two-node cluster successfully running a GFS2 filesystem. I compiled ctdb ver 1.12 with Samba 3.6.3 for 64-bit systems. Running on RHEL 5.7. I was able to add the cluster to the domain but after I restarted CTDB, it panics right after doing a vacuum of the serverid.tdb database. The lock file is on the GFS FS so both nodes can access it. Any ideas as to what
2017 Apr 19
6
CTDB problems
Hi, This morning our CTDB-managed cluster took a nosedive. We had member machines with hung smbd tasks, which caused them to reboot, and the cluster did not come back up consistently. We eventually got it more or less stable with two nodes out of the 3, but we're still seeing worrying messages, e.g. we've just noticed:

    2017/04/19 12:10:31.168891 [ 5417]: Vacuuming child process timed
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke wrote on 15.02.2023 23:23:
> Hi Uli,
>
> [Sorry for slow response, life is busy...]

Thanks for answering anyway!

> On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
> OK, this part looks kind-of good. It would be interesting to know how
> long the entire failover process is taking.

What exactly would you define as the beginning and end of the
2011 May 31
1
Unable to mount Centos 5.6 Server via nfs4 - Operation Not Permitted - MADNESS!
After getting a reasonably configured NFSv4 setup working on my Scientific Linux server, I spent the majority of my evening trying to do the same with my CentOS 5 box, with fruitless results. Most attempts to mount that server return the following message:

    [root at sl01 log]# mount -t nfs4 192.168.15.200:/opt/company_data /mnt
    mount.nfs4: Operation not permitted

As near as I can tell, I was
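
(A note for anyone hitting the same error: on CentOS 5-era NFSv4 servers, "Operation not permitted" from mount.nfs4 is classically caused by a missing fsid=0 pseudo-root export, and clients must then mount relative to that root. A sketch reusing the poster's path and network, which are assumptions here:)

    # /etc/exports on the server: mark the NFSv4 pseudo-root
    /opt/company_data  192.168.15.0/24(rw,sync,fsid=0)

    # On the client, mount relative to the pseudo-root ("/"), not the full path
    mount -t nfs4 192.168.15.200:/ /mnt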
2024 Oct 15
1
ctdb tcp settings for statd failover
Hi, In current (6140c3177a0330f42411618c3fca28930ea02a21) samba's ctdb/tools/statd_callout_helper I find this comment:

    notify)
        ...
        # we need these settings to make sure that no tcp connections survive
        # across a very fast failover/failback
        #echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout
        #echo 0 > /proc/sys/net/ipv4/tcp_max_tw_buckets
        #echo 0 >
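
(For reference, the two complete commented-out lines correspond to the sysctl invocations below; the third line is truncated in the excerpt above and is left as-is:)

    # equivalent sysctl form of the settings quoted above
    sysctl -w net.ipv4.tcp_fin_timeout=10
    sysctl -w net.ipv4.tcp_max_tw_buckets=0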
2014 Apr 02
0
[Announce] CTDB 2.5.3 available for download
Changes in CTDB 2.5.3
=====================

User-visible changes
--------------------
* New configuration variable CTDB_NATGW_STATIC_ROUTES allows the NAT gateway feature to create static host/network routes instead of default routes. See the documentation. Use with care.

Important bug fixes
-------------------
* ctdbd no longer crashes when tickles are processed after reloading the nodes
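
(A rough sketch of a NAT gateway block in the CTDB sysconfig file using the new variable; all addresses and the interface name are placeholders, and the exact route syntax should be checked against the ctdbd.conf documentation for your version:)

    CTDB_NATGW_PUBLIC_IFACE=eth0
    CTDB_NATGW_PUBLIC_IP=10.1.1.121/24
    CTDB_NATGW_PRIVATE_NETWORK=10.0.0.0/24
    CTDB_NATGW_DEFAULT_GATEWAY=10.1.1.254
    # New in 2.5.3: static host/network routes instead of a default route
    CTDB_NATGW_STATIC_ROUTES="10.1.2.0/24 10.1.3.0/24@10.1.1.253"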
2017 Nov 02
0
ctdb vacuum timeouts and record locks
Hi, This occurred again this morning. When the user reported the problem, I found in the ctdb logs that vacuuming had been going on since last night. The need to fix it was urgent (when isn't it?), so I didn't have time to poke around for clues and immediately restarted the lxc container. But this time it wouldn't restart, which I had time to trace to a hung smbd process, and
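
(A generic diagnostic sketch for the hung-smbd situation described above, using only standard tools; this is not from the thread itself:)

    # List smbd processes in uninterruptible sleep (D state),
    # the usual signature of a process stuck on a cluster-filesystem lock
    ps -eo pid,stat,wchan:30,cmd | grep smbd | awk '$2 ~ /D/'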