similar to: migrate samba domain controller to CTDB

Displaying 20 results from an estimated 6000 matches similar to: "migrate samba domain controller to CTDB"

2017 Sep 19
1
tinc 1.0 syslog dump explanation
Hello, I failed to find any explanation of the node statuses in the syslog dump. Could you please explain what these status codes mean and how to interpret them?
Sep 19 07:08:26 ip-10-255-1-200 tinc.routers[20543]: 10_254_5_11 at 10.255.5.11 port 58045 options c socket 7 status 01c2 outbuf 157/0/0
Sep 19 07:08:26 ip-10-255-1-200 tinc.routers[20543]: 10_254_3_113 at 10.255.3.113 port 58233 options c
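The status field is a hexadecimal bitmask of tinc's per-node state flags; which flag each bit represents is defined in the tinc 1.0.x source (node.h / connection.h), which this excerpt doesn't quote. A minimal sketch for at least seeing which bits are set in a value such as 01c2, making no assumption about their meaning:

  # Print the set bit positions of a tinc status word (value from the log above).
  status=0x01c2
  for i in $(seq 0 15); do
      if [ $(( (status >> i) & 1 )) -eq 1 ]; then
          echo "bit $i is set"
      fi
  done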
2019 Jan 18
3
testparm: /var/run/ctdb/ctdb.socket missing
Apologies in advance, but I have been banging my head against this and the only Google results I've found are from 2014 and don't work (or apply).
OS: Ubuntu 18.04 bionic
smbd: 4.9.4-Debian (the apt.van-belle.nl version)
When I run `testparm` I get:
rlimit_max: increasing rlimit_max (8192) to minimum Windows limit (16384)
WARNING: The "syslog" option is deprecated
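testparm only goes looking for the CTDB socket when clustering is enabled, so a plausible first check (an assumption, not confirmed in this excerpt) is whether smb.conf requests clustering on a host where ctdbd isn't actually running:

  # Hedged sanity checks; adjust paths for your packaging.
  testparm -s 2>/dev/null | grep -i clustering   # is "clustering = yes" set?
  pgrep -a ctdbd                                 # is the daemon running at all?
  ls -l /var/run/ctdb/ctdb.socket                # the path testparm complained about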
2020 Mar 03
1
start/stop ctdb
On 03/03/2020 09:13, Ralph Boehme via samba wrote:
> Am 3/3/20 um 10:05 AM schrieb Micha Ballmann via samba:
>> Mar  3 09:50:50 ctdb1 systemd[1]: Starting CTDB...
>> Mar  3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node
>> Mar  3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as
>> PID: 24667
>> Mar  3 09:50:50 ctdb1 ctdbd[24667]: Created PID file
2015 Aug 04
3
Does CTDB run under LXC containers?
We're transitioning from a VM based environment to one that uses LXC based containers running under CentOS 7. CTDB runs fine under our CentOS 7 VMs. The same packages running under LXC however seem to have issues:
# systemctl start ctdb.service
Job for ctdb.service failed. See 'systemctl status ctdb.service' and 'journalctl -xn' for details.
# systemctl status ctdb.service
2015 Aug 04
1
Does CTDB run under LXC containers?
I'm using libvirt_lxc and that has an XML based configuration. Based on what I've read, I think I need to add this to the ctdb container's config:
<features>
  <capabilities policy='default'>
    <sys_nice state='on'/>
  </capabilities>
</features>
That didn't do the trick though. I need to figure out how to turn on all caps to
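For what it's worth, libvirt's LXC domain XML documents a policy='allow' mode that grants all capabilities at once, which sounds like what the poster is after; a hedged fragment (verify against your libvirt version):

  <!-- inside the <domain type='lxc'> definition -->
  <features>
    <capabilities policy='allow'/>
  </features>

Individual capabilities can then be switched back off with state='off' entries if granting everything proves too broad.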
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! The ctdb logs follow:
2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node
2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602
2018/02/12 19:38:51.529060
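For context, this message generally means another node (or a stale holder) already owns the recovery lock, or fcntl locking on the cluster filesystem is unhealthy. A hedged first-pass check with standard ctdb/samba tooling (the test file path is a placeholder):

  # Where is the recovery lock configured to live?
  ctdb getreclock
  # Is byte-range locking coherent across nodes on that filesystem?
  # ping_pong ships with Samba/CTDB; run it on both nodes at the same time,
  # with node count + 1 as the second argument (2 nodes -> 3).
  ping_pong /clusterfs/reclock.test 3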
2019 Oct 03
2
CTDB and nfs-ganesha
Hi Max, On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote:
> As soon as I made the configuration change and restarted CTDB, it crashes.
>
> Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
> Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
> Oct 2 11:05:21 hq-6pgluster01
2018 May 04
2
CTDB Path
Hello, I want to install a CTDB cluster with Samba 4.7.7 built from source. I compiled Samba as follows:
./configure --with-cluster-support --with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad
The whole Samba environment is located in /usr/local/samba/. CTDB is located in /usr/local/samba/etc/ctdb. Am I right that the correct path for ctdbd.conf (node file, public address file
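For a default-prefix source build like this, the configuration is expected under the install prefix rather than /etc; a hedged sketch of the assumed layout (4.7.x still reads ctdbd.conf, while 4.9 and later switched to ctdb.conf):

  /usr/local/samba/etc/ctdb/ctdbd.conf          # daemon options (pre-4.9 style)
  /usr/local/samba/etc/ctdb/nodes               # one private cluster IP per line
  /usr/local/samba/etc/ctdb/public_addresses    # floating IPs, e.g. 192.168.1.10/24 eth0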
2015 Aug 04
3
Does CTDB run under LXC containers?
On 2015-08-04 at 19:27 +0200, Ralph Böhme wrote:
> Hi Peter,
>
> On Tue, Aug 04, 2015 at 10:11:56AM -0700, Peter Steele wrote:
> > We're transitioning from a VM based environment to one that uses LXC based
> > containers running under CentOS 7. CTDB runs fine under our CentOS 7 VMs.
> > The same packages running under LXC however seem to have issues:
> >
2014 Aug 16
1
CTDB: Failed to connect client socket to daemon.
Ubuntu 14.04, ctdb 2.5.3, samba 4.1.11. CTDB is working with IP takeover between the 2 nodes. The machine is joined to the domain. Any help with the following errors would be most gratefully received.
1. connect to socket error: ctdb status
2014/08/16 15:32:03.248034 [23255]: client/ctdb_client.c:267 Failed to connect client socket to daemon. Errno:Connection refused(111)
common/cmdline.c:156
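This error from the ctdb client tool usually just means ctdbd is not (or no longer) running, or that tool and daemon disagree about the socket path. A hedged sanity check (the log path is an assumption for this build):

  pgrep -a ctdbd                   # is the daemon actually up?
  ss -xl | grep -i ctdb            # which unix socket, if any, exists?
  tail -n 50 /var/log/log.ctdb     # why ctdbd exited, if it did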
2020 Mar 03
2
start/stop ctdb
Thanks, I added "Environment=PATH=$PATH:/usr/local/samba/bin:/bin". I also needed to add "/bin" because it couldn't find "sleep". That's the script now:
----
[Unit]
Description=CTDB
Documentation=man:ctdbd(1) man:ctdb(7)
After=network-online.target time-sync.target
ConditionFileNotEmpty=/usr/local/samba/etc/ctdb/nodes

[Service]
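One detail worth noting: systemd performs no shell-style expansion in Environment= lines, so "$PATH" above is taken literally; listing the directories explicitly is safer. A hedged sketch of a complete unit along these lines for a source build under /usr/local/samba (paths and options are assumptions, not the poster's final file):

  [Unit]
  Description=CTDB
  Documentation=man:ctdbd(1) man:ctdb(7)
  After=network-online.target time-sync.target
  ConditionFileNotEmpty=/usr/local/samba/etc/ctdb/nodes

  [Service]
  Type=forking
  Environment=PATH=/usr/local/samba/bin:/usr/local/samba/sbin:/usr/bin:/bin
  ExecStart=/usr/local/samba/sbin/ctdbd
  PIDFile=/usr/local/samba/var/run/ctdb/ctdbd.pid

  [Install]
  WantedBy=multi-user.target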
2020 Mar 03
3
start/stop ctdb
Hi, I configured a (hopefully) working three node CTDB Samba cluster. Virtual IPs float between the nodes when one of them fails. I'm using Samba 4.11.6 on Ubuntu 18.04; Samba was compiled from source. I configured systemd start/stop scripts for samba (smbd, nmbd) and winbind, and disabled those so the services can be managed via ctdb. I enabled ctdb to manage samba and winbind via this
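The usual division of labour (hedged; exact commands depend on the CTDB version) is to stop systemd from starting the file-serving daemons and let CTDB's legacy event scripts start and monitor them instead:

  # Keep systemd's hands off the daemons CTDB will manage:
  systemctl disable --now smbd nmbd winbind
  # Enable CTDB's legacy event scripts for them (ctdb 4.9+ syntax):
  ctdb event script enable legacy 50.samba
  ctdb event script enable legacy 49.winbind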
2023 Jan 26
1
ctdb samba and winbind event problem
Hi to all, I have a CTDB cluster with two nodes (both Ubuntu with Sernet packages 4.17.4). Now I want to replace one of the nodes. The first step was to bring a new node into the CTDB cluster, this time a Debian 11 but with the same Sernet packages (4.17.4), adding the new node to /etc/ctdb/nodes at the end of the list, and the virtual IP to /etc/ctdb/public_addresses, also at the end
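For reference, the usual sequence (a hedged sketch with placeholder addresses, not the poster's values) is to append the new node on every existing node, in identical order, then tell the running daemons to reload:

  # On every node, the new private IP goes at the end of the nodes file:
  echo "10.0.0.13" >> /etc/ctdb/nodes
  # Likewise the new virtual IP:
  echo "192.168.1.13/24 eth0" >> /etc/ctdb/public_addresses
  # Ask the cluster to pick up the extended node list:
  ctdb reloadnodes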
2019 Oct 05
2
CTDB and nfs-ganesha
Hi Max, On Fri, 4 Oct 2019 14:01:22 +0000, Max DiOrio <Max.DiOrio at ieeeglobalspec.com> wrote:
> Looks like this is the actual error:
>
> 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started
> 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0
> 2019/10/04 09:51:29.175021
2020 Mar 03
5
start/stop ctdb
Hi, I updated the variables for my scenario, but CTDB won't start:
Mar  3 09:50:50 ctdb1 systemd[1]: Starting CTDB...
Mar  3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as PID: 24667
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Created PID file /usr/local/samba/var/run/ctdb/ctdbd.pid
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Removed
2018 May 07
2
CTDB Path
Hello, I'm still trying to find out the right path for ctdb.conf (Ubuntu 18.04, Samba compiled from source). When I try to start CTDB without any config file, the log in /usr/local/samba/var/log/log.ctdb shows:
2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file "/usr/local/samba/etc/ctdb/nodes"
2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
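The first log line is fairly literal: ctdbd could not read its nodes file at the prefix path. A hedged minimal fix (IPs are placeholders):

  mkdir -p /usr/local/samba/etc/ctdb
  # One private cluster IP per line; identical file and order on all nodes:
  printf '10.0.0.11\n10.0.0.12\n10.0.0.13\n' > /usr/local/samba/etc/ctdb/nodes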
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------ Original Message ------------------
From: Zhu Shangzhong 10137461
To: samba@lists.samba.org <samba@lists.samba.org>
Date: 2018-02-26 17:10
Subject: [ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! The
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but running it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
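For orientation: clean_start is a fence_daemon attribute in cluster.conf, not a CTDB setting; clean_start="1" skips startup fencing, which the cluster documentation generally discourages. A hedged fragment (values are illustrative, not from this thread):

  <!-- fragment of /etc/cluster/cluster.conf -->
  <fence_daemon clean_start="0" post_join_delay="3"/>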
2019 Oct 01
3
CTDB and nfs-ganesha
Hi there, I seem to be having trouble wrapping my brain around the CTDB and ganesha configuration. I thought I had it figured out, but it doesn't seem to be doing any checking of the nfs-ganesha service. I put nfs-ganesha-callout as executable in /etc/ctdb. I created the nfs-checks-ganesha.d folder in /etc/ctdb, and in there I have 20.nfs_ganesha.check. In my ctdbd.conf file I have: # Options to
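For comparison, the knobs the CTDB-shipped 60.nfs event script consumes are CTDB_NFS_CALLOUT and CTDB_NFS_CHECKS_DIR (documented in ctdb-script.options(5)); a hedged sketch of the relevant lines, matching the paths the poster describes:

  # Drive nfs-ganesha instead of the kernel NFS server:
  CTDB_NFS_CALLOUT="/etc/ctdb/nfs-ganesha-callout"
  CTDB_NFS_CHECKS_DIR="/etc/ctdb/nfs-checks-ganesha.d"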
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi all, we did some failover/failback tests on 2 nodes (A and B) with the architecture 'glusterfs + ctdb (public address) + nfs-ganesha'.
1st: During a write, unplug the network cable of serving node A -> the NFS client took a few seconds to recover and continue writing. After some minutes, plug the network cable of serving node A back in -> the NFS client also took a few seconds to recover
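When failback stalls like this, it helps to watch where CTDB believes the public address lives while the cable is out; a hedged observation loop to run on a surviving node:

  # Node states plus which node currently hosts each public IP, every 2s:
  watch -n 2 'ctdb status; ctdb ip'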