Hi,

again, another question I ran into. Still with my test setup: SLES11 x86_64, Lustre 1.8.1.1.

I tried to enable root squash, but I guess I must have done something wrong. I want root squash enabled with just one exception, so on the MGS/MDS I did:

  mgs-mds:/lustre # umount bar
  mgs-mds:/lustre # tunefs.lustre --erase-params \
      --param "mdt.root_squash=65534:65534" \
      --param "mdt.nosquash_nids=10.0.0.82@tcp" \
      --mgsnode="10.0.0.81@tcp" /dev/xvdb2
  checking for existing Lustre data: found CONFIGS/mountdata
  Reading CONFIGS/mountdata

     Read previous values:
  Target:     bar-MDT0000
  Index:      0
  Lustre FS:  bar
  Mount type: ldiskfs
  Flags:      0x441
              (MDT update )
  Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
  Parameters: mgsnode=10.0.0.81@tcp

     Permanent disk data:
  Target:     bar-MDT0000
  Index:      0
  Lustre FS:  bar
  Mount type: ldiskfs
  Flags:      0x441
              (MDT update )
  Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
  Parameters: mdt.root_squash=65534:65534 mdt.nosquash_nids=10.0.0.82@tcp mgsnode=10.0.0.81@tcp

  Writing CONFIGS/mountdata

and mounted the filesystem again:

  mgs-mds:/lustre # mount -t lustre /dev/xvdb2 /lustre/bar-mdt

However, when I run lctl get_param I get this:

  mgs-mds:/lustre # lctl get_param mdt.bar-MDT0000.root_squash
  error: get_param: /proc/{fs,sys}/{lnet,lustre}/mdt/bar-MDT0000/root_squash: Found no match
  mgs-mds:/lustre # lctl get_param mdt.bar-MDT0000.nosquash_nids
  error: get_param: /proc/{fs,sys}/{lnet,lustre}/mdt/bar-MDT0000/nosquash_nids: Found no match

The value for mdt.nosquash_nids was taken from lctl list_nids on the client host, so that should be fine. I then also mounted the two bar OSTs and mounted the filesystem on the clients (10.0.0.89 and 10.0.0.82).

On both clients, root can still delete files owned by ordinary users, even files with permissions like 600.

regards,
Sebastian
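(Editorial note, added as a hedged diagnostic sketch: in the 1.8.x series the squash settings are, if I recall the proc layout correctly, exported by the MDS service rather than the MDT, so they may surface under the `mds.*` prefix instead of `mdt.*`. The exact parameter paths below are an assumption about the 1.8 proc tree, not confirmed on this system.)

```shell
# On the MGS/MDS node: try the mds.* prefix, which 1.8.x may use
# (parameter path is an assumption; adjust to what your tree shows)
lctl get_param mds.bar-MDT0000.root_squash
lctl get_param mds.bar-MDT0000.nosquash_nids

# Locate where the squash parameters actually live in proc,
# regardless of prefix:
find /proc/fs/lustre -name '*squash*'
```

If the entries turn up under `mds.*`, the tunefs.lustre `--param` prefix may need to match that name for the setting to take effect on this release.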