similar to: status of Continuous availability in SMB3

Displaying 20 results from an estimated 1000 matches similar to: "status of Continuous availability in SMB3"

2016 Aug 31
0
status of Continuous availability in SMB3
On 2016-08-31 at 08:13 +0000, zhengbin.08747 at h3c.com wrote: > hi Michael Adam: > Thanks for your work on samba. Here I am looking for some advice and your help. > I have been stuck on continuous availability with samba 4.3.9 for two weeks. Continuous availability in SMB3 is an attractive feature and I am struggling to enable it. > > smb.conf, ctdb.conf are attached. Cluster file
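For context, a minimal sketch of the clustered smb.conf this thread revolves around (parameter values are illustrative assumptions, not the poster's actual configuration; the continuous availability capability is advertised per share in the tree connect response and was not exposed by Samba 4.3.x):

    [global]
        clustering = yes
        netbios name = CLUSTER1          # hypothetical shared server name
        security = ads                   # assumption; depends on the site

    [ca-share]
        path = /clusterfs/ca-share       # assumed cluster filesystem mount
        read only = no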
2016 Aug 31
2
status of Continuous availability in SMB3
On Wed, Aug 31, 2016 at 10:29:51AM +0200, Michael Adam via samba wrote: > On 2016-08-31 at 08:13 +0000, zhengbin.08747 at h3c.com wrote: > > When carrying out the test, I have a Wireshark capture running. It is confirmed that the SMB3 protocol is used. In the tree connect phase, the server's response to the client claims that it does not support DFS/Continuous Availability. > > I'm
2011 Apr 11
1
[CTDB] how does LMASTER know where the record is stored?
Greetings list, I was looking at the wiki "samba and clustering" and a ctdb.pdf, admittedly both are quite old (2006 or 2007) and I don't know how things have changed over the years, but I just have two questions about LMASTER: < this is from the pdf > - LMASTER is fixed - LMASTER is based on the record key only - LMASTER knows where the record is stored - new records are stored on LMASTER. Q1.
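As a rough illustration of the scheme the old ctdb.pdf describes (a conceptual sketch, not CTDB source code): the LMASTER is derived from the record key alone, so any node can compute it without asking anyone else, and it in turn tracks which node currently holds the record:

    # conceptual only
    lmaster(key) = hash(key) mod number_of_lmaster_capable_nodes
    # the DMASTER (current data master) migrates between nodes as they
    # access the record; the LMASTER always records where it went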
2016 Nov 09
4
CTDB and samba private dir (+ldap)
hi everyone, in an attempt to set up a cluster I'm reading around and see some howto writers say to put the "private dir" on the cluster FS - one question I have: is this correct? necessary? I have partial success, I get: $ ctdb status Number of nodes:2 pnn:0 10.5.6.32 OK pnn:1 10.5.6.49 UNHEALTHY (THIS NODE) Generation:323266562 Size:2 hash:0 lmaster:0 hash:1
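A sketch of the split usually recommended for this (paths are illustrative): the TDB databases, including the private dir, stay on local disk because CTDB replicates the persistent ones such as secrets.tdb itself; the cluster filesystem is needed for the recovery lock and the exported shares, not for the private dir:

    [global]
        clustering = yes
        private dir = /var/lib/samba/private    # local on every node

    # old-style ctdbd.conf of that era -- the recovery lock does need the cluster FS:
    #   CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/lockfile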
2014 Jul 08
1
smbd does not start under ctdb
Hi, a 2-node drbd cluster with ocfs2. Both nodes: openSUSE 4.1.9 with drbd 8.4 and ctdbd 2.3. All seems OK with ctdb: n1: ctdb status Number of nodes:2 pnn:0 192.168.0.10 OK (THIS NODE) pnn:1 192.168.0.11 OK Generation:1187222392 Size:2 hash:0 lmaster:0 hash:1 lmaster:1 Recovery mode:NORMAL (0) Recovery master:0 n2: ctdb status Number of nodes:2 pnn:0 192.168.0.10 OK pnn:1 192.168.0.11
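With the ctdb 2.x generation used here, smbd is normally started by the CTDB event scripts rather than by the init system; a sketch of the relevant settings (file locations vary by distribution, so treat the paths as assumptions):

    # /etc/sysconfig/ctdb (or /etc/default/ctdb on Debian-style systems)
    CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/lockfile   # assumed ocfs2 mount
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_MANAGES_SAMBA=yes      # CTDB starts/stops smbd via the 50.samba event script
    CTDB_MANAGES_WINBIND=yes
    # the distribution's own smb service should then be disabled, e.g. systemctl disable smb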
2018 Jun 28
4
CTDB upgrade to SAMBA 4.8.3
Hello, I upgraded my ctdb cluster (3 nodes) from samba 4.7.7 to 4.8.3. I followed the steps under "policy" on this wiki page https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster. I shut down all CTDB nodes and upgraded them. After the upgrade I started all nodes and ctdb status shows: Number of nodes:3 pnn:0 192.168.199.52 OK (THIS NODE) pnn:1 192.168.199.53 OK pnn:2
2016 Jul 03
4
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 13:06, Volker Lendecke wrote: > On Fri, Jul 01, 2016 at 10:00:21AM +0100, Alex Crow wrote: >> We've had a strange issue after following the recommendations at >> https://wiki.samba.org/index.php/Ping_pong, particularly the part >> about mmap coherence. We are running CTDB/Samba over a MooseFS >> clustered FS, and we'd not done the ping-pong before.
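For reference, the knob being discussed is the global smb.conf parameter below (a sketch; whether to change it depends on whether the cluster filesystem passes the ping_pong mmap coherence test, and the TDB databases should live on local storage either way):

    [global]
        # default is yes; the Ping_pong wiki page discusses setting it to no
        # when the cluster FS lacks coherent mmap
        use mmap = no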
2014 Jul 11
1
ctdb PARTIALLYONLINE
drbd ctdb ocfs2. Hi, everything seems OK apart from the IP takeover. public_addresses 192.168.1.80/24 enp0s3 192.168.1.81/24 enp0s3 ctdb status Number of nodes:2 pnn:0 192.168.1.10 PARTIALLYONLINE pnn:1 192.168.1.11 PARTIALLYONLINE (THIS NODE) Generation:2090727463 Size:2 hash:0 lmaster:0 hash:1 lmaster:1 Recovery mode:NORMAL (0) Recovery master:1 but we are getting: 2014/07/11
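A sketch of the matching public_addresses file for this setup (the interface named here must exist and be up on the node, otherwise CTDB keeps the node PARTIALLYONLINE):

    # /etc/ctdb/public_addresses on both nodes
    192.168.1.80/24 enp0s3
    192.168.1.81/24 enp0s3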
2020 Aug 05
2
CTDB question about "shared file system"
Could I impose upon someone to provide some guidance? Some hint? Thank you. Is a shared file system actually required? If etcd is used to manage the global recovery lock, is there any need at that point for a shared file system? In other words, are there samba or CTDB files (state) that must be on a shared file system, or can each clustered host simply have these files locally? What must be
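A sketch of the ctdb.conf fragment the question implies (new-style configuration; the helper path is an assumption and depends on where the ctdb_etcd_lock helper was installed). With a mutex helper providing the recovery lock, CTDB itself does not strictly require a shared filesystem, though the exported shares still need one if every node is to serve the same data:

    # /etc/ctdb/ctdb.conf
    [cluster]
        # "!" means: run this helper command instead of locking a file on a shared FS
        recovery lock = !/usr/libexec/ctdb/ctdb_etcd_lock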
2008 Dec 04
1
Join multiple CTDB managed Samba servers into Active Directory
Hi, I have set up a 2-node CTDB cluster serving NFS and CIFS, authenticating Windows and Linux users via Active Directory. The setup works fine, except that only one server in the CTDB cluster is able to join the AD domain at any given time. If you manually add the other server into AD, the already connected server gets disconnected. There is no specific error message logged in /var/log/message or
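For comparison, the usual approach (a sketch assuming all cluster nodes present themselves as one logical file server): every node uses the same netbios name in smb.conf, the domain is joined once from any one node, and CTDB replicates the resulting machine account secret to the others, so the nodes stop evicting each other from the domain:

    [global]
        clustering = yes
        netbios name = CLUSTERFS     # hypothetical shared server name
        security = ads
        realm = EXAMPLE.COM          # placeholder realm

    # then, on one node only:
    #   net ads join -U Administrator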
2018 May 04
2
CTDB Path
Hello, at this time I want to install a CTDB cluster with SAMBA 4.7.7 from source! I compiled samba as follows: ./configure --with-cluster-support --with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad The whole SAMBA environment is located in /usr/local/samba/. CTDB is located in /usr/local/samba/etc/ctdb. Am I right that the correct path of ctdbd.conf (node file, public address file
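Assuming a default source build with prefix /usr/local/samba, the layout the question implies would roughly be as follows (a sketch for the 4.7-era ctdb; verify the exact locations against the installed man pages):

    /usr/local/samba/etc/ctdb/ctdbd.conf          # daemon configuration
    /usr/local/samba/etc/ctdb/nodes               # one private cluster IP per line
    /usr/local/samba/etc/ctdb/public_addresses    # floating IPs with interface names
    /usr/local/samba/etc/smb.conf                 # Samba configuration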
2016 Sep 01
0
Reply: status of Continuous availability in SMB3
Where is the patch? I can help to get this integrated. -----Original Message----- From: Jeremy Allison [mailto:jra at samba.org] Sent: 1 September 2016 7:57 To: Michael Adam Cc: zhengbin 08747 (RD); weidong 12656 (RD); 'samba at lists.samba.org' Subject: Re: [Samba] status of Continuous availability in SMB3 On Wed, Aug 31, 2016 at 10:29:51AM +0200, Michael Adam via samba wrote: > On 2016-08-31 at 08:13 +0000,
2019 Nov 15
4
[PATCH 0/2] drm/nouveau: remove some set but not used variables
zhengbin (2): drm/nouveau: remove set but not used variable 'pclks','width' drm/nouveau: remove set but not used variable 'mem' drivers/gpu/drm/nouveau/dispnv04/arb.c | 6 ++---- drivers/gpu/drm/nouveau/nouveau_ttm.c | 4 ---- 2 files changed, 2 insertions(+), 8 deletions(-) -- 2.7.4
2019 Dec 18
1
[PATCH v2] drm/nouveau/mmu: Remove unneeded semicolon
Fixes coccicheck warning: drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c:583:2-3: Unneeded semicolon drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h:307:2-3: Unneeded semicolon Reported-by: Hulk Robot <hulkci at huawei.com> Signed-off-by: zhengbin <zhengbin13 at huawei.com> --- v1->v2: add a missing space after the closing curly bracket
2019 Dec 16
1
[PATCH] drm/nouveau/mmu: Remove unneeded semicolon
Fixes coccicheck warning: drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c:583:2-3: Unneeded semicolon drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h:307:2-3: Unneeded semicolon Reported-by: Hulk Robot <hulkci at huawei.com> Signed-off-by: zhengbin <zhengbin13 at huawei.com> --- drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 2 +- drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 2 +- 2
2019 Aug 23
2
plenty of vacuuming processes
Hi, I have a ctdb cluster with 3 nodes and 3 glusterfs (version 6) nodes up and running. I observe plenty of these situations: A connected Windows 10 client doesn't react anymore. I use folder redirections. - smbstatus shows some (auth in progress) processes. - In the logs of a ctdb node I get: Aug 23 10:12:29 ctdb-1 ctdbd[2167]: Ending traverse on DB locking.tdb (id 568831), records
2016 Jul 03
2
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 21:47, Volker Lendecke wrote: > On Sun, Jul 03, 2016 at 08:42:36PM +0100, Alex Crow wrote: >> I've only put the "private dir" onto MooseFS, as instructed in the CTDB >> docs. > Can you quote these docs, so that we can correct them? > >> So, in that case, I'm assuming from your comments that it is no worry >> that the mmap test does not
2012 May 11
0
CTDB daemon crashed on bringing down one node in the cluster
All, I have a 3-node CTDB cluster which serves 4 'public addresses'. The /etc/ctdb/public_addresses file is node specific and present at the above path on the participating nodes. All the nodes run RHEL 6.2. Other ctdb config files such as "nodes" and "public_addresses" are placed on a shared filesystem mounted at a known location (say, /gluster/lock). On starting CTDB
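For reference, the two files mentioned are plain text lists; a sketch with placeholder addresses (the nodes file must be identical on all nodes and list the private cluster addresses in the same order, while public_addresses may be node specific):

    # nodes -- identical on every node
    10.0.0.1
    10.0.0.2
    10.0.0.3

    # public_addresses -- may differ per node
    192.0.2.10/24 eth0
    192.0.2.11/24 eth0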
2020 Oct 15
2
setlmasterrole in config
On Thu, Oct 15, 2020 at 09:01:01AM -0400, Robert Buck via samba wrote: > Can someone please respond to this question? We're unsure how to > persistently set these flags, which are VERY useful for performance from > what we see. We want to ensure that after reboot, particular nodes are > always set on (or others off). From the ctdb/doc/ctdb.1.xml man page file:
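The persistent equivalent of 'ctdb setlmasterrole off' lives in the new-style ctdb.conf; a sketch from memory of ctdb.conf(5), to be checked against the man page for the installed version:

    # /etc/ctdb/ctdb.conf
    [legacy]
        lmaster capability = false
        # recmaster capability = false   # analogous knob for the recovery master role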
2014 Jul 03
0
ctdb split brain nodes doesn't see each other
Hi, I've set up a simple ctdb cluster. Actually I copied the config file from an existing system. That's what happens: Node 1, alone: Number of nodes:2 pnn:0 10.0.0.1 OK (THIS NODE) pnn:1 10.0.0.2 DISCONNECTED|UNHEALTHY|INACTIVE Generation:1369816268 Size:1 hash:0 lmaster:0 Recovery mode:NORMAL (0) Recovery master:0 Node 1, after starting ctdb on Node 2: Number of nodes:2 pnn:0
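When both nodes show the peer as DISCONNECTED like this, the usual first checks are that the nodes files match exactly on both machines and that the CTDB transport port is reachable (a sketch, not output from the poster's system):

    # on each node
    ctdb listnodes            # must print the same list in the same order everywhere
    ss -tlnp | grep 4379      # ctdbd listens on TCP port 4379 by default
    # then confirm the peer is reachable on that port through any firewall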