Hello,

I created a volume on LVM2 thin pools so I can use snapshots and normal users can recover files from a snapshot. I set some parameters for Samba and for the snapshots so that normal users can access the snapshot and the snapshots are listed in Windows Explorer (the relevant gluster commands are sketched at the end of this mail).

Here is my volume info:

------------------------
root@cluster01:~# gluster v info

Volume Name: gv1
Type: Replicate
Volume ID: ce000513-7af5-4dd7-8b9c-f495dc95adb0
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: c01:/gluster/brick
Brick2: c02:/gluster/brick
Brick3: c03:/gluster/brick
Options Reconfigured:
features.show-snapshot-directory: on
features.uss: enable
features.barrier: disable
performance.cache-size: 512MB
network.ping-timeout: 10
performance.write-behind: off
performance.cache-invalidation: on
server.event-threads: 4
client.event-threads: 4
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.stat-prefetch: on
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.self-heal-daemon: enable
cluster.data-self-heal: on
cluster.metadata-self-heal: on
cluster.entry-self-heal: on
cluster.force-migration: on
performance.cache-max-file-size: 10
performance.write-behind-window-size: 4MB
performance.read-ahead: on
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
------------------------

As you can see, both features.uss: enable and features.show-snapshot-directory: on are set.

The snapshot is active:

------------------
root@cluster01:~# gluster snapshot info snap1_GMT-2025.01.04-09.53.36
Snapshot                : snap1_GMT-2025.01.04-09.53.36
Snap UUID               : 9d082fc7-1e19-4cc8-96e7-db9d0d2b68d6
Created                 : 2025-01-04 09:53:36 +0000
Snap Volumes:
    Snap Volume Name        : c988b43e16ca4935a2a3c5be80083379
    Origin Volume name      : gv1
    Snaps taken for gv1     : 1
    Snaps available for gv1 : 255
    Status                  : Started
------------------

As user "root" I can access the snapshot in my Samba share:

-------
root@cluster01:~# cd /glusterfs/admin-share/daten1/.snaps/snap1_GMT-2025.01.04-09.53.36/
root@cluster01:/glusterfs/admin-share/daten1/.snaps/snap1_GMT-2025.01.04-09.53.36# ls
meins  u1-verw

ls -ld /glusterfs/admin-share/daten1/.snaps
drwxr-xr-x 2 root root 4096 1. Jan 1970 //glusterfs/admin-share/daten1/.snaps
-------

If I try the same as a normal user, I get:

-------
u1-verw@cluster01:/$ cd /glusterfs/admin-share/daten1/.snaps
-bash: cd: /glusterfs/admin-share/daten1/.snaps: Permission denied
-------

BUT: if I change into the .snaps directory first and only then do "su - <user>":

--------
root@cluster01:/glusterfs/admin-share/daten1/.snaps/snap1_GMT-2025.01.04-09.53.36/u1-verw# su - u1-verw
u1-verw@cluster01:/glusterfs/admin-share/daten1/.snaps/snap1_GMT-2025.01.04-09.53.36/u1-verw$ ls -l
insgesamt 1
drwxrwx---+ 2 u1-verw domain users 42  4. Jan 10:51 v1
--------

So if the user changes directly into the snapshot, he can access all the files inside it. The problem therefore seems to be the .snaps directory itself - but why?
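For completeness, the snapshot side was set up with the standard gluster CLI commands along the lines below. This is a rough sketch from memory, so the exact invocation may have differed slightly; the two volume options are the ones visible in the volume info above.

--------
# enable user-serviceable snapshots and expose the virtual .snaps directory
gluster volume set gv1 features.uss enable
gluster volume set gv1 features.show-snapshot-directory on

# create the snapshot (the GMT timestamp is appended to the name automatically)
# and activate it so it becomes reachable under .snaps
gluster snapshot create snap1 gv1
gluster snapshot activate snap1_GMT-2025.01.04-09.53.36
--------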