Displaying 4 results from an estimated 4 matches for "sambaclust".
2014 Feb 19
2
Samba4: Strange Behaviour on Home share with 2 DCs replicating / vfs glusterfs
...me\testneu
The reason is a different UID: e.g. 3000030 on the first DC, 3000023 on the second.
How can I fix this?
Greetings, Daniel
On DC1:
[home]
comment = home s4master directory on gluster node1
vfs objects = recycle, glusterfs
recycle:repository = /%P/%U/.Papierkorb
glusterfs:volume = sambacluster
glusterfs:volfile_server = 172.17.1.1
recycle:exclude = *.tmp,*.temp,*.log,*.ldb,*.TMP,?~$*,~$*
recycle:keeptree = yes
recycle:exclude_dir = .Papierkorb,tmp,temp,profile,.profile
recycle:touch_mtime = yes
recycle:versions = yes
msdfs root = yes
path = /ads/home
read only = no
posix locking = no
kernel s...
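The differing IDs are the usual symptom of per-DC ID allocation: each Samba AD DC
hands out xidNumbers locally in its own idmap.ldb, and those allocations are not
replicated between DCs, so the same SID can end up with a different UID on each DC.
A minimal sketch of the common fix, assuming uidNumber/gidNumber attributes are
maintained in AD (RFC2307 schema, e.g. via the UNIX Attributes tab):

  [global]
      # sketch: have the DC use the uidNumber/gidNumber stored in AD
      # instead of locally allocated xidNumbers
      idmap_ldb:use rfc2307 = yes

Alternatively, idmap.ldb can be copied from one DC to the other so both
allocate from the same state.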
2018 Jun 28
1
CTDB upgrade to SAMBA 4.8.3
I'm sorry, you're right:
my "local" smb.conf on each client:
[global]
  clustering = yes
  include = registry
net conf list (registry output):
[global]
         security = ads
         netbios name = sambacluster
         realm = DOMAINNAME.de
         workgroup = DOMAINNAME
         idmap config *:backend = tdb
         idmap config *:range = 3000-7999
         idmap config domainname:backend = ad
         idmap config domainname:range = 10000-999999
         idmap config domainname:schema_mode = rfc2307...
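With include = registry, every node reads the shared [global] settings above
from the clustered registry, so they only have to be maintained once. A hedged
sketch of how such parameters are set, using the standard net conf syntax and
values already shown above:

  net conf setparm global "security" "ads"
  net conf setparm global "netbios name" "sambacluster"

The per-node smb.conf then stays as small as the one quoted above.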
2016 Feb 09
2
FW: After Upgrade to Samba-4.3.4
What I had done before updating to 4.3.4, and it was working until then:
I used the UNIX Attributes tab in ADUC and gave a uid and gid to all users/groups except administrator.
This worked until the update. Now the DCs mix up group IDs (and only group IDs) with computer IDs (security tab).
[root@s4slave exim]# getent group personal
TPLK\personal:x:3000044:
[root@s4slave exim]# getent group reserve09$
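Whether an ID really points at the wrong object can be checked by resolving
the mapping in both directions. A hedged diagnostic sketch using wbinfo,
assuming winbindd is answering; the GID 3000044 is taken from the output
above, and the SID on the second line is a placeholder:

  wbinfo --gid-to-sid=3000044   # which SID is behind this GID?
  wbinfo -s S-1-5-21-...        # resolve that SID back to a name and type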
2018 Jun 28
4
CTDB upgrade to SAMBA 4.8.3
Hello,
I upgraded my CTDB cluster (3 nodes) from Samba 4.7.7 to 4.8.3. I followed
the steps under "policy" on this wiki page:
https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster. I shut down
all CTDB nodes and upgraded them. After the upgrade I started all nodes,
and ctdb status shows:
Number of nodes:3
pnn:0 192.168.199.52   OK (THIS NODE)
pnn:1 192.168.199.53   OK
pnn:2
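For reference, a hedged sketch of the usual post-upgrade checks behind the
output above, using standard ctdb commands on any node:

  ctdb status   # every pnn should report OK
  ctdb ip       # verify the public addresses are assigned across the nodes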