similar to: ctdb vacuum timeouts and record locks

Displaying 20 results from an estimated 4000 matches similar to: "ctdb vacuum timeouts and record locks"

2017 Oct 27
0
ctdb vacuum timeouts and record locks
Hi Bob, On Thu, 26 Oct 2017 22:44:30 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote:
> I set up a ctdb cluster a couple months back. Things seemed pretty solid for the first 2-3 weeks, but then I started getting reports of people not being able to access files, or sometimes directories. It has taken me a while to figure some stuff out,
2017 Oct 27
3
ctdb vacuum timeouts and record locks
Hi Martin, Thanks for reading and taking the time to reply
>> ctdbd[89]: Unable to get RECORD lock on database locking.tdb for 20 seconds
>> /usr/local/samba/etc/ctdb/debug_locks.sh: 142:
>> /usr/local/samba/etc/ctdb/debug_locks.sh: cannot create : Directory nonexistent
>> sh: echo: I/O error
>> sh: echo: I/O error
>
> That's weird. The only
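For anyone chasing the same "cannot create :" failure, a hedged first step: the empty filename after "create" suggests the script is redirecting into an unset variable, so tracing the expansions should show which one (the script is invoked here without the arguments ctdbd normally passes, just to watch it run):

# Trace each command as the shell executes it (path taken from the log above):
sh -x /usr/local/samba/etc/ctdb/debug_locks.sh 2>&1 | tail -50
# If the cause turns out to be a missing log directory, creating it is a
# plausible fix (directory name is an assumption, not from the thread):
mkdir -p /usr/local/samba/var/log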
2017 Nov 06
2
ctdb vacuum timeouts and record locks
On Thu, 2 Nov 2017 11:17:27 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote:
> This occurred again this morning, when the user reported the problem, I found in the ctdb logs that vacuuming has been going on since last night. The need to fix it was urgent (when isn't it?) so I didn't have time to poke around for clues, but immediately
2017 Nov 02
0
ctdb vacuum timeouts and record locks
Hi, This occurred again this morning, when the user reported the problem, I found in the ctdb logs that vacuuming has been going on since last night. The need to fix it was urgent (when isn't it?) so I didn't have time to poke around for clues, but immediately restarted the lxc container. But this time it wouldn't restart, which I had time to trace to a hung smbd process, and
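Before restarting anything, hung smbd processes of the kind described here can be spotted with standard tooling (a sketch, nothing ctdb-specific assumed):

# smbd processes stuck in uninterruptible sleep (state D) usually point at
# the cluster filesystem; the wchan column hints at the kernel wait channel:
ps -eo pid,stat,wchan:30,args | awk '$2 ~ /^D/ && /smbd/'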
2017 Nov 02
2
ctdb vacuum timeouts and record locks
Hm, I stand corrected on the problem-solved statement below. IP addresses are simply not cooperating on the 2nd node.
root at vault1:~# ctdb ip
Public IPs on node 0
192.168.120.90 0
192.168.120.91 0
192.168.120.92 0
192.168.120.93 0
root at vault2:/service/ctdb/log/main# ctdb ip
Public IPs on node 1
192.168.120.90 0
192.168.120.91 0
192.168.120.92 0
192.168.120.93 0
root at
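When public addresses stay pinned to one node like this, the commands below (from ctdb(1); the node number and address are illustrative) ask ctdb to reassess the layout:

ctdb ip all                    # show which node hosts each public IP
ctdb reloadips                 # re-read the public_addresses file on all nodes
ctdb moveip 192.168.120.92 1   # hand one address to node 1
# note: per ctdb(1), moveip only works with the tunables
# DeterministicIPs=0 and NoIPFailback=1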
2017 Apr 19
6
CTDB problems
Hi, This morning our CTDB managed cluster took a nosedive. We had member machines with hung smbd tasks which caused them to reboot, and the cluster did not come back up consistently. We eventually got it more or less stable with two nodes out of the 3, but we're still seeing worrying messages, eg we've just noticed:
2017/04/19 12:10:31.168891 [ 5417]: Vacuuming child process timed
2014 Oct 29
1
smbstatus hang with CTDB 2.5.4 and Samba 4.1.13
Can anyone help with some pointers to debug a problem with Samba and CTDB with smbstatus traversing the connections tdb? I've got a new two-node cluster with Samba and CTDB on AIX. If I run smbstatus when the server has much user activity, it hangs and the node it was run on gets banned. I see the following in the ctdb log:
2014/10/29 11:12:45.374580 [3932342]:
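To narrow down whether the traverse itself is what wedges, the clustered databases can be poked directly instead of going through smbstatus (a sketch, assuming a reasonably recent ctdb):

ctdb getdbmap                  # list attached TDBs and their IDs
ctdb catdb connections.tdb     # dump records; a hang here implicates ctdb
ctdb dbstatistics locking.tdb  # lock-wait and hot-key statistics per database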
2019 Aug 23
2
plenty of vacuuming processes
Hi, I have a ctdb cluster with 3 nodes and 3 glusterfs (version 6) nodes up and running. I observe plenty of these situations: a connected Windows 10 client doesn't react anymore. I use folder redirections.
- smbstatus shows up some (auth in progress) processes.
- In the logs of a ctdb node I get:
Aug 23 10:12:29 ctdb-1 ctdbd[2167]: Ending traverse on DB locking.tdb (id 568831), records
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------ Original Message ------------------
From: 朱尚忠10137461
To: samba@lists.samba.org <samba@lists.samba.org>
Date: 2018-02-26 17:10
Subject: [ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! The
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! The following are the ctdb logs:
2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node
2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602
2018/02/12 19:38:51.529060
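Persistent recovery-lock contention usually means either another ctdbd still holds the lock or the cluster filesystem is not delivering coherent fcntl byte-range locking. A hedged check with the ping_pong tool shipped with ctdb (the file path is illustrative):

# Run concurrently on every node; the count should be number of nodes + 1:
ping_pong /clusterfs/.ctdb/test.dat 3
# And confirm all nodes agree on the same recovery lock file:
ctdb getreclock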
2017 Nov 15
0
ctdb vacuum timeouts and record locks
Hi Martin, well, it has been over a week since my last hung process, but got another one today...
>> So, not sure how to determine if this is a gluster problem, an lxc problem, or a ctdb/smbd problem. Thoughts/suggestions are welcome...
>
> You need a stack trace of the stuck smbd process. If it is wedged in a system call on the cluster filesystem then you can blame
2017 Nov 15
1
ctdb vacuum timeouts and record locks
On Tue, 14 Nov 2017 22:48:57 -0800, Computerisms Corporation via samba <samba at lists.samba.org> wrote:
> well, it has been over a week since my last hung process, but got another one today...
> >> So, not sure how to determine if this is a gluster problem, an lxc problem, or a ctdb/smbd problem. Thoughts/suggestions are welcome...
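Capturing that stack trace is cheap (the PID is illustrative), and the kernel-side view alone is often enough to implicate the filesystem:

cat /proc/12345/stack    # kernel stack: shows the syscall the process is wedged in
gstack 12345             # userspace stack, from the gdb package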
2018 May 04
2
CTDB Path
Hello, at this time I want to install a CTDB cluster with SAMBA 4.7.7 from SOURCE! I compiled samba as follows:
./configure --with-cluster-support --with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad
The whole SAMBA environment is located in /usr/local/samba/. CTDB is located in /usr/local/samba/etc/ctdb. Am I right that the correct path of ctdbd.conf (node file, public address file
2018 May 07
2
CTDB Path
Hello, I'm still trying to find out what is the right path for ctdb.conf (Ubuntu 18.04, samba was compiled from source!!). When I try to start CTDB without any config file, my log in /usr/local/samba/var/log/log.ctdb shows me:
2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file "/usr/local/samba/etc/ctdb/nodes"
2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
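Going by that log, ctdbd is already looking under the build prefix, so creating the files there should unblock it. A minimal sketch, assuming the pre-4.9 shell-style configuration (4.9 and later read an ini-style ctdb.conf instead); the node IPs and lock path are illustrative:

mkdir -p /usr/local/samba/etc/ctdb
# one private (internode) IP per line:
printf '10.0.0.1\n10.0.0.2\n' > /usr/local/samba/etc/ctdb/nodes
echo 'CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/reclock' > /usr/local/samba/etc/ctdb/ctdbd.conf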
2017 Apr 19
1
CTDB problems
On Wed, 19 Apr 2017 18:06:35 +0200, David Disseldorp via samba wrote:
> > 2017/04/19 10:40:31.294250 [ 7423]: /etc/ctdb/debug_locks.sh: line 73: gstack: command not found
>
> This script attempts to dump the stack trace of the blocked process, but can't as gstack isn't installed - it should be available in the gdb package.
>
> @Martin: would the
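Where gstack isn't packaged, an equivalent gdb one-liner does the same job (PID illustrative):

gdb -batch -ex 'thread apply all bt' -p 12345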
2017 Nov 08
1
ctdb vacuum timeouts and record locks
Hi Martin, Thanks for your answer...
>> I am using the 10.external. ip addr show shows the correct IP addresses on eth0 in the lxc container. rebooted the physical machine, this node is buggered. shut it down, used ip addr add to put the addresses on the other node, used ctdb addip and the node took it and node1 is now functioning with all 4 IPs just
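For reference, the manual moves described above map onto these ctdb(1) commands (interface name and netmask are illustrative):

ctdb delip 192.168.120.90           # on the node giving the address up
ctdb addip 192.168.120.90/24 eth0   # on the node taking it over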
2020 Mar 03
5
start/stop ctdb
Hi, I updated the variables for my scenario, but CTDB won't start:
Mar 3 09:50:50 ctdb1 systemd[1]: Starting CTDB...
Mar 3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node
Mar 3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as PID: 24667
Mar 3 09:50:50 ctdb1 ctdbd[24667]: Created PID file /usr/local/samba/var/run/ctdb/ctdbd.pid
Mar 3 09:50:50 ctdb1 ctdbd[24667]: Removed
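A hedged sketch of a unit file matching the paths in that log (a source build under /usr/local/samba; ctdbd forks and writes the PID file shown above):

# /etc/systemd/system/ctdb.service
# afterwards: systemctl daemon-reload && systemctl start ctdb
[Unit]
Description=CTDB
After=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/samba/sbin/ctdbd
PIDFile=/usr/local/samba/var/run/ctdb/ctdbd.pid

[Install]
WantedBy=multi-user.target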
2019 Jan 18
3
testparm: /var/run/ctdb/ctdb.socket missing
Apologies in advance, but I have been banging my head against this and the only Google results I've found are from 2014, and don't work (or apply).
OS: Ubuntu 18.04 bionic
smbd: 4.9.4-Debian (the apt.van-belle.nl version)
When I run `testparm` I get:
rlimit_max: increasing rlimit_max (8192) to minimum Windows limit (16384)
WARNING: The "syslog" option is deprecated
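A hedged guess at the two usual causes: a cluster-enabled smbd build with clustering switched on makes every tool (testparm included) want the ctdbd socket. Either turn clustering off, or point Samba at the socket the running ctdbd actually creates (the path below is an assumption, not from the thread):

[global]
    # if this host is not actually a cluster member:
    clustering = no
    # or, if it is, match the real socket path of the running ctdbd:
    # ctdbd socket = /usr/local/samba/var/run/ctdb/ctdbd.socket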
2020 Mar 03
3
start/stop ctdb
Hi, I configured a running three-node CTDB SAMBA cluster (I hope so). Virtual IPs are floating between the nodes after failing one of them. I'm using SAMBA 4.11.6 on Ubuntu 18.04. Samba was compiled from source. I configured some systemd start/stop scripts for samba (smbd, nmbd) and winbind, and disabled them so they are managed via ctdb instead. I enabled ctdb to manage samba and winbind via this
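For reference, the enable/disable steps described here in 4.11 syntax (the event scripts named are the stock legacy ones):

systemctl disable --now smbd nmbd winbind   # ctdb's event scripts start these
ctdb event script enable legacy 50.samba
ctdb event script enable legacy 49.winbind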
2015 Aug 04
3
Does CTDB run under LXC containers?
We're transitioning from a VM based environment to one that uses LXC based containers running under CentOS 7. CTDB runs fine under our CentOS 7 VMs. The same packages running under LXC however seem to have issues:
# systemctl start ctdb.service
Job for ctdb.service failed. See 'systemctl status ctdb.service' and 'journalctl -xn' for details.
# systemctl status ctdb.service
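One container-specific gotcha worth ruling out (a hedged guess, not a confirmed diagnosis for this report): ctdbd tries to give itself realtime scheduling priority, which unprivileged LXC containers typically refuse. ctdbd(1) has a switch to skip that:

ctdbd --nosetsched      # don't attempt SCHED_FIFO inside the container
# or, in /etc/sysconfig/ctdb on CentOS 7:
# CTDB_NOSETSCHED=yes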