similar to: ctdb node disable windows xcopy break

Displaying 20 results from an estimated 900 matches similar to: "ctdb node disable windows xcopy break"

2016 Jul 03
2
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 21:47, Volker Lendecke wrote:
> On Sun, Jul 03, 2016 at 08:42:36PM +0100, Alex Crow wrote:
>> I've only put the "private dir" onto MooseFS, as instructed in the CTDB
>> docs.
> Can you quote these docs, so that we can correct them?
>
>> So, in that case, I'm assuming from your comments that it is no worry
>> that the mmap test does not
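The setting the subject refers to is the smb.conf global "use mmap". A minimal sketch of the configuration under discussion; the MooseFS mount point is an assumption, not from the post:

    [global]
        clustering = yes
        use mmap = no                              # the subject's use_mmap change
        private dir = /mnt/moosefs/samba-private   # assumed mount point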
2014 Jul 08
1
smbd does not start under ctdb
Hi, a 2-node drbd cluster with ocfs2. Both nodes: openSUSE 4.1.9 with drbd 8.4 and ctdbd 2.3. All seems OK with ctdb:

n1: ctdb status
Number of nodes:2
pnn:0 192.168.0.10       OK (THIS NODE)
pnn:1 192.168.0.11       OK
Generation:1187222392
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0

n2: ctdb status
Number of nodes:2
pnn:0 192.168.0.10       OK
pnn:1 192.168.0.11
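For smbd to come up under CTDB's control in that era, the usual wiring was the legacy sysconfig file plus clustering enabled in smb.conf. A minimal sketch; the recovery-lock path on the shared ocfs2 mount is an assumption:

    # /etc/sysconfig/ctdb (ctdb 2.x style; reclock path is an assumption)
    CTDB_RECOVERY_LOCK=/cluster/ocfs2/.ctdb/reclock
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_MANAGES_SAMBA=yes    # the 50.samba event script starts/stops smbd

    # smb.conf must also carry:
    #   [global]
    #       clustering = yes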
2012 May 11
0
CTDB daemon crashed on bringing down one node in the cluster
All, I have a 3-node CTDB cluster which serves 4 'public addresses'. The /etc/ctdb/public_addresses file is node-specific and present at that path on each participating node. All the nodes run RHEL 6.2. Other ctdb config files such as "nodes" and "public_addresses" are placed on a shared filesystem mounted at a known location (say, /gluster/lock). On starting CTDB
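For reference, the two files have different scopes: the nodes list must be identical cluster-wide, while public_addresses describes what each node can host. A sketch with illustrative addresses:

    # nodes -- must be identical on every node (can live on the shared FS)
    192.168.0.1
    192.168.0.2
    192.168.0.3

    # /etc/ctdb/public_addresses -- per node; one floating IP per line
    10.0.0.101/24 eth0
    10.0.0.102/24 eth0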
2014 Jul 03
0
ctdb split brain: nodes don't see each other
Hi, I've set up a simple ctdb cluster, actually copying the config file from an existing system. Here is what happens:

Node 1, alone:
Number of nodes:2
pnn:0 10.0.0.1       OK (THIS NODE)
pnn:1 10.0.0.2       DISCONNECTED|UNHEALTHY|INACTIVE
Generation:1369816268
Size:1
hash:0 lmaster:0
Recovery mode:NORMAL (0)
Recovery master:0

Node 1, after start of ctdb on Node 2:
Number of nodes:2
pnn:0
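When each node shows the peer as DISCONNECTED, the first things worth checking are that the nodes files match exactly and that the interconnect port is reachable (CTDB's default transport port is 4379; the peer address below is from the post):

    onnode all md5sum /etc/ctdb/nodes    # checksums must be identical
    nc -z 10.0.0.2 4379 && echo "ctdb port reachable from this node"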
2016 Aug 31
0
status of Continuous availability in SMB3
On 2016-08-31 at 08:13 +0000, zhengbin.08747 at h3c.com wrote:
> hi Michael Adam:
> Thanks for your work on samba. Here I am looking for some advice and your help.
> I have been stuck on continuous availability with samba 4.3.9 for two weeks. Continuous availability in SMB3 is an attractive feature and I am struggling to enable it.
>
> smb.conf, ctdb.conf are attached. Cluster file
2008 Jun 04
0
CTDB problems: 1) Unable to get tcp info for CTDB_CONTROL_TCP_CLIENT, 2) ctdb disable doesn't failover
greetings, trying to follow tridge's failover process at http://samba.org/~tridge/ctdb_movies/node_disable.html I encounter this error.

oss02:~ # smbstatus -np
Processing section "[homes]"
Processing section "[profiles]"
Processing section "[users]"
Processing section "[groups]"
Processing section "[local]"
Processing section
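The failover exercise in tridge's demo boils down to disabling a node and watching the public addresses move; roughly:

    ctdb status     # note which nodes are healthy
    ctdb ip         # which node hosts each public address now
    ctdb disable    # administratively disable this node
    ctdb ip         # addresses should have moved to the other node(s)
    ctdb enable     # bring the node back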
2018 Jun 28
0
CTDB upgrade to SAMBA 4.8.3
On Thu, 28 Jun 2018 10:30:07 +0200 Micha Ballmann via samba <samba at lists.samba.org> wrote:
> Hello,
>
> I upgraded my ctdb cluster (3 nodes) from samba 4.7.7 to 4.8.3,
> following the steps under "policy" on this wiki page:
> https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster. I shut down
> all CTDB nodes and upgraded them. After the upgrade I started
2016 Nov 09
4
CTDB and samba private dir (+ldap)
hi everyone

In an attempt to set up a cluster I've been reading around, and some howto writers say to put the "private dir" on the cluster FS - one question I have: is this correct? Necessary?

I have partial success; I get:

$ ctdb status
Number of nodes:2
pnn:0 10.5.6.32       OK
pnn:1 10.5.6.49       UNHEALTHY (THIS NODE)
Generation:323266562
Size:2
hash:0 lmaster:0
hash:1
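What definitely has to live on the cluster filesystem is the recovery lock; whether the Samba private dir belongs there is exactly what this thread disputes, since CTDB replicates the important TDBs itself. A sketch of the one uncontroversial setting, with an assumed path:

    # /etc/sysconfig/ctdb (legacy style; the path is an assumption)
    CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/reclock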
2014 Jul 11
1
ctdb PARTIALLYONLINE
drbd + ctdb + ocfs2. Hi, everything seems OK apart from the IP takeover.

public_addresses:
192.168.1.80/24 enp0s3
192.168.1.81/24 enp0s3

ctdb status
Number of nodes:2
pnn:0 192.168.1.10       PARTIALLYONLINE
pnn:1 192.168.1.11       PARTIALLYONLINE (THIS NODE)
Generation:2090727463
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:1

but we are getting: 2014/07/11
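PARTIALLYONLINE generally means CTDB thinks one or more interfaces named in public_addresses have no link; a quick way to check, using the interface name from the post:

    ctdb ifaces          # lists public interfaces and their link state
    ip link show enp0s3  # verify the carrier is up on each node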
2016 Nov 09
0
samba CTDB inability to create smb.conf cache
hi everyone

any experience with ctdb? I'm setting up a simple cluster, only with an ldap backend instead of tdb2. One (of the two) servers fails this way:

50.samba: ERROR: smb.conf cache create failed

$ ctdb status
Number of nodes:2
pnn:0 10.5.6.32       OK
pnn:1 10.5.6.49       UNHEALTHY (THIS NODE)
Generation:323266562
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery
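The 50.samba event script builds its smb.conf cache from testparm output, so a sensible first step is to run testparm by hand on the failing node and see whether it errors out or hangs (for example on an unreachable LDAP server or include):

    testparm -s     # must exit cleanly; 50.samba caches its output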
2016 Nov 09
0
CTDB and samba private dir (+ldap)
On 09/11/16 16:05, lejeczek via samba wrote:
> hi everyone
>
> an attempt to set up a cluster, I'm reading around and see
> some howto writers would say to put "private dir on the FS
> cluster" - one question I have: is this correct? necessary?
>
> I have partial success, I get:
>
> $ ctdb status
> Number of nodes:2
> pnn:0 10.5.6.32 OK
>
2016 Aug 31
3
status of Continuous availability in SMB3
hi Michael Adam: Thanks for your work on samba. Here I am looking for some advice and your help. I have been stuck on continuous availability with samba 4.3.9 for two weeks. Continuous availability in SMB3 is an attractive feature and I am struggling to enable it.

smb.conf, ctdb.conf are attached. The cluster file system is cephfs, mounted at /CephStorage. Client: Windows 8 Pro.

root at node0:~# samba
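For what it's worth, the clustered share such a setup sits on looks roughly like the sketch below (share name and path are assumptions). Note that a continuously available share requires SMB3 persistent handles, which Samba of that vintage reportedly did not implement, so the feature may simply not be reachable on 4.3:

    [global]
        clustering = yes

    [share]
        path = /CephStorage/share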
2018 Jun 28
1
CTDB upgrade to SAMBA 4.8.3
I'm sorry, you're right. My "local" smb.conf on each client:

[global]
    clustering = yes
    include = registry

net conf list (output registry):

[global]
        security = ads
        netbios name = sambacluster
        realm = DOMAINNAME.de
        workgroup = DOMAINNAME
        idmap config *:backend = tdb
        idmap config *:range = 3000-7999
        idmap config
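With include = registry, the registry side is maintained through the net utility; typical commands, using values visible in the post:

    net conf list                                   # dump the registry config
    net conf setparm global "netbios name" sambacluster
    net conf getparm global security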
2018 Jun 28
4
CTDB upgrade to SAMBA 4.8.3
Hello, I upgraded my ctdb cluster (3 nodes) from samba 4.7.7 to 4.8.3, following the steps under "policy" on this wiki page: https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster. I shut down all CTDB nodes and upgraded them. After the upgrade I started all nodes, and ctdb status shows:

Number of nodes:3
pnn:0 192.168.199.52   OK (THIS NODE)
pnn:1 192.168.199.53   OK
pnn:2
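The whole-cluster upgrade the wiki page describes amounts to stopping every node before touching packages; sketched with onnode (the service-manager commands depend on the distro):

    onnode all systemctl stop ctdb     # stop the entire cluster first
    # ... upgrade the samba/ctdb packages on every node ...
    onnode all systemctl start ctdb
    onnode all ctdb status             # all nodes should come back OK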
2015 Jan 09
0
Samba 4 CTDB setting Permission from Windows
Hello everybody, I am trying to set up GlusterFS together with CTDB. The OS on all systems is Debian wheezy, no backports active. All Samba packages are from Sernet (samba 4.14). My setup is the following:

------------
GlusterFS:
------------
Node1: 192.168.57.101
Node2: 192.168.57.102

Two nodes, each with one disk. The disks are formatted. The disks
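A two-node replica volume for the CTDB lock directory, as a sketch (brick paths and the volume name are assumptions; gluster may ask for 'force' on a plain replica-2 volume because of the split-brain risk):

    gluster volume create ctdb-lock replica 2 \
        192.168.57.101:/bricks/lock 192.168.57.102:/bricks/lock
    gluster volume start ctdb-lock
    mount -t glusterfs 192.168.57.101:/ctdb-lock /gluster/lock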
2011 Apr 11
1
[CTDB] how does LMASTER know where the record is stored?
Greetings list, I was looking at the wiki "samba and clustering" and a ctdb.pdf; admittedly both are quite old (2006 or 2007) and I don't know how things have changed over the years, but I just have two questions about LMASTER:

<this is from the pdf>
- LMASTER fixed
- LMASTER is based on record key only
- LMASTER knows where the record is stored
- new records are stored on LMASTER

Q1.
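On the "how does it know" part: the location master for a record is a pure function of the key, roughly a hash of the key modulo the number of nodes in the VNN map, so every node can compute it locally without asking anyone. As a pseudo-formula, not CTDB source:

    lmaster(record) = hash(record_key) mod nodes_in_vnnmap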
2014 Feb 26
0
CTDB Debug Help
Hello, I've got a two-node CTDB/Samba cluster that I'm having trouble with: trying to add a node back after having to do an OS reload on it. The servers are running CTDB 2.5.1 and Samba 4.1.4 on AIX 7.1 TL2. The Samba CTDB databases and Samba service work fine from the node that was not reloaded. The rebuilt node is failing to re-add itself to the cluster. I'm looking for
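A common recovery step in this situation is to clear the rebuilt node's stale local copies of the clustered TDBs before it rejoins; a sketch, assuming the default local database directory of that era:

    # on the rebuilt node, with ctdb stopped (path varies by build)
    rm -f /var/ctdb/*.tdb.*    # discard stale local TDB copies
    service ctdb start
    ctdb status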
2015 Jan 13
0
Samba 4 CTDB setting Permission from Windows
Hello Davor,

On 12.01.2015 at 19:44, Davor Vusir wrote:
> 2015-01-12 17:47 GMT+01:00 Stefan Kania <stefan at kania-online.de>:
> On 11.01.2015 at 19:10, Davor Vusir wrote:
>>>> Hi Stefan!
>>>>
>>>> 2015-01-09 17:27 GMT+01:00 Stefan Kania
>>>> <stefan at kania-online.de>: Hello
2010 May 20
0
Fwd: Which version of CTDB
I did a new rsync and compiled ctdb anew. This version did not install a new config file (/etc/sysconfig/ctdb); I had to use my old one. After starting, ctdb wanted its state directory in /usr/local/var/ctdb/state, and I could not change that in the /etc/sysconfig/ctdb file, so I had to mkdir /usr/local/var/ctdb manually. After starting ctdb on both nodes, all nodes remain unhealthy. And
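The /usr/local/var path is the stock autoconf default, so a source build aimed at the distro layout usually wants the prefix and state directory set at configure time; a sketch:

    ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
    make && make install    # state then lands under /var/ctdb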
2011 Jun 11
0
ext3 and btrfs various Oops and kernel BUGs
Dear all, (please Cc) yesterday I hit two bugs with btrfs and ext3, always happening when plugging in an external USB btrfs disk. Today I had a BUG which is purely ext3-related. The last bug was with yesterday's latest git checkout. Here are the three bugs/oopses; this one was with 3.0.0-rc2 (exactly):

Jun 10 14:50:23 mithrandir kernel: [40871.704129] BUG: unable to handle kernel