Jochen Korge || PCSM GmbH
2022-Feb-24 19:28 UTC
[Samba] inconsistent ID mapping with rid backend and ctdb
Hi,
we noticed some permission issues: some users were unable to change or write
files and folders, while read permissions seemed to work as expected.
After some investigation we encountered "flapping" UID/GID mappings
between the configured RID and TDB ranges.
E.g. the group "domain-users" flaps between 3008 and 1000513, and an
admin user account flaps between 3097 and 1001103.
After startup a server seems to take IDs in the higher (RID) range, and after
some hours it swaps to the lower (TDB) range.
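For context, the idmap_rid backend derives Unix IDs deterministically from the SID's RID and the low end of the configured range (ID = range low + RID), while the tdb backend allocates IDs sequentially in first-seen order, so a name can only map into both ranges if both backends are being consulted. A minimal Python sketch of the rid arithmetic (the function name is mine, not a Samba API; 513 is the well-known RID of Domain Users):

```python
# Sketch of how the idmap_rid backend derives a Unix ID:
#   id = range_low + RID   (deterministic, identical on every node)
# The tdb backend, in contrast, hands out the next free ID in its
# range on first lookup, so its numbers depend on lookup order.

def rid_to_id(rid: int, range_low: int, range_high: int) -> int:
    """Map a SID's RID into the configured idmap range."""
    unix_id = range_low + rid
    if unix_id > range_high:
        raise ValueError("RID falls outside the configured range")
    return unix_id

# Domain Users has the well-known RID 513:
print(rid_to_id(513, 1000000, 1999999))   # -> 1000513
```

This matches the observed pair: 1000513 comes from the rid range, while 3008 is simply whatever the tdb allocator handed out first on that node.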
The really strange part is that different GIDs were shown at the same time
on the three servers.
When I restart only one machine, it shows IDs in the 1M range, while the other
two stay at 3K.
Within a machine, getent and wbinfo stay consistent; between machines the
results are sometimes inconsistent, even though ctdb status shows a healthy
cluster.
The only other strange behavior (apart from the changing IDs) I found:
wbinfo -s SomeSID
OURDOMAIN\username
wbinfo --lookup-sids SomeSID
SomeSID -> <none>\username
What might have caused the havoc: after the problems emerged, I changed
idmap config OUR.DOMAIN.FQDN
to
idmap config OURDOMAIN
Setup information:
Our setup consists of 3 machines running Samba 4.13.13 (Debian Bullseye) with
CTDB, acting as member servers with the vfs_ceph backend. Clients are 100%
Windows (from XP to 11) and all users come from the domain.
On the AD side there is one Windows 2019 DC holding all FSMO roles behind a
firewall, plus 2 Samba AD DCs serving the clients and the CTDB cluster.
Relevant testparm output (consistent between machines):
[global]
clustering = Yes
kerberos method = secrets and keytab
netbios aliases = OURNASHA OURNAS01 OURNAS02 OURNAS03
netbios name = OURNASHA
realm = OUR.DOMAIN.FQDN
registry shares = Yes
security = ADS
server min protocol = NT1
server role = member server
winbind enum groups = Yes
winbind enum users = Yes
winbind expand groups = 4
winbind refresh tickets = Yes
winbind use default domain = Yes
workgroup = OURDOMAIN
smbd: backgroundqueue = no
idmap config OURDOMAIN : range = 1000000-1999999
idmap config OURDOMAIN : backend = rid
idmap config * : range = 3000-7999
ctdb:registry.tdb = yes
idmap config * : backend = tdb
admin users = @domänen-admins @sudo
hide unreadable = Yes
[share]
kernel share modes = No
map acl inherit = Yes
path = /share1/
read only = No
vfs objects = acl_xattr ceph_snapshots ceph
acl_xattr:ignore system acls = yes
ceph: user_id = samba.gw
ceph: config_file = /etc/ceph/ceph.conf
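One thing worth double-checking in a config like the above: the default ('*') range and each domain's range must not overlap, since winbind relies on disjoint ranges to decide which backend owns a given ID. A small sketch of that check (the helper is mine, not a Samba tool):

```python
# Sanity check for idmap ranges: the default ('*') range and the
# per-domain range must be disjoint, or mappings become ambiguous.

def ranges_overlap(a: tuple, b: tuple) -> bool:
    """True if two inclusive (low, high) ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

default_range = (3000, 7999)         # idmap config * : range
domain_range = (1000000, 1999999)    # idmap config OURDOMAIN : range

print(ranges_overlap(default_range, domain_range))  # -> False
```

The ranges here are fine on their own; the flapping therefore points at lookups being answered by the wrong backend, not at a range collision.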
Help is really appreciated
Cheers Jochen
Mit freundlichen Grüßen / best regards,
Jochen Korge
Mobil +49 711 28695277
PCSM GmbH
Crailsheimerstrasse 15, 70435, Stuttgart
Tel. +49 711 230 44 96
Fax +49 711 230 44 97
Managing director: Thomas Martin | Registered office: Stuttgart
District court (Amtsgericht) Stuttgart, HRB no. 733394 / VAT ID: DE815181359
Rowland Penny
2022-Feb-24 19:58 UTC
[Samba] inconsistent ID mapping with rid backend and ctdb
On Thu, 2022-02-24 at 19:28 +0000, Jochen Korge || PCSM GmbH via samba wrote:
> Hi,
>
> we realized some permission issues (some users were unable to change
> or write files and folders, read permissions seemed to work as
> expected).
>
> After some investigation we encountered "flapping" UID/GID mappings
> between the configured RID and TDB ranges.
> E.g. the group "domain-users" flaps between 3008 and 1000513, an
> admin user account flaps between 3097 and 1001103.
> After startup it seems to take IDs in the higher (RID) range and
> after some hours it swaps to the lower (TDB) range.
> The really strange part is that the different GIDs were shown at the
> same time on the three servers.
> When I restart only one machine, it shows IDs in the 1M range, while
> the other 2 stay at 3K.
> Within a machine, getent and wbinfo stay consistent, between machines
> (even ctdb status shows healthy cluster) the results are sometimes
> inconsistent.
>
> Only strange behavior (apart from changing IDs) I found:
> wbinfo -s SomeSID
> OURDOMAIN\username
>
> wbinfo --lookup-sids SomeSID
> SomeSID -> <none>\username
>
> What might have caused that havoc:
> I changed (after the problems emerged)
> idmap config OUR.DOMAIN.FQDN

That was incorrect.

> to
> idmap config OURDOMAIN

That is correct.

> Setup information:
> Our setup consists of 3 machines running Samba 4.13.13 (Debian
> Bullseye) with CTDB as member servers and vfs_ceph backend. Clients
> are 100% Windows (from XP to 11) and users are all from the domain.
> AD-side is one Windows 2019 DC holding all FSMO roles behind a
> firewall, 2 Samba AD DCs serving the clients and CTDB cluster.

How have you joined a Samba DC to a 2019 domain?

> Relevant testparm output (consistent between machines):
> [global]
> clustering = Yes
> kerberos method = secrets and keytab
> netbios aliases = OURNASHA OURNAS01 OURNAS02 OURNAS03
> netbios name = OURNASHA
> realm = OUR.DOMAIN.FQDN
> registry shares = Yes
> security = ADS
> server min protocol = NT1

Why use SMBv1? Does something rely on it?

> server role = member server
> winbind enum groups = Yes
> winbind enum users = Yes

You can remove the 'enum' lines, you do not need them.

> winbind expand groups = 4
> winbind refresh tickets = Yes
> winbind use default domain = Yes
> workgroup = OURDOMAIN
> smbd: backgroundqueue = no
> idmap config OURDOMAIN : range = 1000000-1999999
> idmap config OURDOMAIN : backend = rid

You should get constant numbers now, and that should include Domain Users, which should get '1000513'.

> idmap config * : range = 3000-7999
> ctdb:registry.tdb = yes
> idmap config * : backend = tdb
> admin users = @domänen-admins @sudo
> hide unreadable = Yes
>
> [share]
> kernel share modes = No
> map acl inherit = Yes
> path = /share1/
> read only = No
> vfs objects = acl_xattr ceph_snapshots ceph
> acl_xattr:ignore system acls = yes
> ceph: user_id = samba.gw
> ceph: config_file = /etc/ceph/ceph.conf

Rowland