On 07/31/2015 05:33 PM, Jüri Palis wrote:
> Playing around with my GlusterFS test setup I discovered the following anomaly.
>
> On a volume with low access traffic, ACLs on directories (managed over NFS)
> sort of work: I can add a new ACL entry to a directory, and when I execute
> getfacl right after setting the ACL, it displays the correct settings.
> However, this data is never replicated to the other GlusterFS node hosting
> this particular volume, and the ACL disappears a few minutes after being set.
>
> The same operation performed on a file works correctly: an ACL entry set on
> a particular file on one GlusterFS NFS server is replicated to the
> participating node almost immediately.
>
> So, it would be interesting to see if someone here can reproduce this
> anomaly.
>
> J.
I could reproduce a similar issue: after remounting the volume, the
directory ACLs are no longer displayed. I shall look into this further and
update with my findings.
# getfacl /mnt/dir5
getfacl: Removing leading '/' from absolute path names
# file: mnt/dir5
# owner: root
# group: root
user::rwx
group::r-x
group:tmpgroup:r-x
mask::r-x
other::r-x
#
# umount /mnt
# mount -t nfs 10.70.xx.xx:/vol0 /mnt
# getfacl /mnt/dir5
getfacl: Removing leading '/' from absolute path names
# file: mnt/dir5
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
#
These ACLs are displayed, though, when running getfacl on the brick path directly.
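As a side check, the ACL can be read both through the NFS mount and straight off the brick backend; a minimal sketch (paths are illustrative, borrowed from different examples in this thread — substitute your own mount point and brick path):

```shell
# On the NFS client: read the directory ACL through the mount
getfacl /mnt/dir5

# On a Gluster server: read the same directory straight from the brick
getfacl /data/gfs/acltest/brick0/brick/dir5

# ACLs live in extended attributes on the brick filesystem; dump them raw
getfattr -d -m 'system.posix_acl' -e hex /data/gfs/acltest/brick0/brick/dir5
```

If the system.posix_acl_access xattr is present on one brick but missing on its replica, the setacl was committed locally but never replicated.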
Thanks,
Soumya

> On 31 Jul 2015, at 09:35, Soumya Koduri <skoduri at redhat.com> wrote:
>
>> I have tested it using the gluster-NFS server with GlusterFS version
>> 3.7.* running on a RHEL 7 machine and RHEL 6.7 as the NFS client. ACLs
>> with named groups got properly set on the directory.
>>
>> Could you please provide us the packet trace (preferably taken on the
>> server side so that we can check the Gluster operations too) while doing
>> setfacl and getfacl?
>>
>> Thanks,
>> Soumya
>>
>> On 07/30/2015 07:38 PM, Jüri Palis wrote:
>>> Hi,
>>>
>>> Mounted the GlusterFS volume with the native mount and ACLs are working
>>> as expected; mounted the same volume with the NFS protocol and the
>>> result is exactly the same as I described below. ACLs set on files work
>>> and ACLs set on a directory do not work as expected. Ohh, I'm out of
>>> ideas :(
>>>
>>> J.
>>> On 30 Jul 2015, at 16:38, Jüri Palis <jyri.palis at gmail.com
>>> <mailto:jyri.palis at gmail.com>> wrote:
>>>
>>>>
>>>> [2015-07-30 13:16:01.002296] T [rpcsvc.c:316:rpcsvc_program_actor]
>>>> 0-rpc-service: Actor found: ACL3 - SETACL for 10.1.1.32:742
>>>> [2015-07-30 13:16:01.002325] T [MSGID: 0] [acl3.c:672:acl3svc_setacl]
>>>> 0-nfs-ACL: FH to Volume: acltest
>>>> [2015-07-30 13:16:01.004287] T [rpcsvc.c:1319:rpcsvc_submit_generic]
>>>> 0-rpc-service: submitted reply for rpc-message (XID: 0x16185ddc,
>>>> Program: ACL3, ProgVers: 3, Proc: 2) to rpc-transport (socket.nfs-server)
>>>> [2015-07-30 13:16:22.823894] T [rpcsvc.c:316:rpcsvc_program_actor]
>>>> 0-rpc-service: Actor found: ACL3 - GETACL for 10.1.1.32:742
>>>> [2015-07-30 13:16:22.823900] T [MSGID: 0] [acl3.c:532:acl3svc_getacl]
>>>> 0-nfs-ACL: FH to Volume: acltest
>>>> [2015-07-30 13:16:22.824218] D [MSGID: 0]
>>>> [client-rpc-fops.c:1156:client3_3_getxattr_cbk] 0-acltest-client-1:
>>>> remote operation failed: No data available. Path:
>>>> <gfid:12f02b4f-a181-47d4-9b5b-69e889483570>
>>>> (12f02b4f-a181-47d4-9b5b-69e889483570). Key: system.posix_acl_default
>>>> [2015-07-30 13:16:22.825675] D [MSGID: 0]
>>>> [client-rpc-fops.c:1156:client3_3_getxattr_cbk] 0-acltest-client-0:
>>>> remote operation failed: No data available. Path:
>>>> <gfid:12f02b4f-a181-47d4-9b5b-69e889483570>
>>>> (12f02b4f-a181-47d4-9b5b-69e889483570). Key: system.posix_acl_default
>>>> [2015-07-30 13:16:22.825713] T [rpcsvc.c:1319:rpcsvc_submit_generic]
>>>> 0-rpc-service: submitted reply for rpc-message (XID: 0x63815edc,
>>>> Program: ACL3, ProgVers: 3, Proc: 1) to rpc-transport (socket.nfs-server)
>>>> [2015-07-30 13:16:22.828243] T [rpcsvc.c:316:rpcsvc_program_actor]
>>>> 0-rpc-service: Actor found: ACL3 - SETACL for 10.1.1.32:742
>>>> [2015-07-30 13:16:22.828266] T [MSGID: 0] [acl3.c:672:acl3svc_setacl]
>>>> 0-nfs-ACL: FH to Volume: acltest
>>>> [2015-07-30 13:16:22.829931] T [rpcsvc.c:1319:rpcsvc_submit_generic]
>>>> 0-rpc-service: submitted reply for rpc-message (XID: 0x75815edc,
>>>> Program: ACL3, ProgVers: 3, Proc: 2) to rpc-transport (socket.nfs-server)
>>>>
>>>> Enabled trace for a few moments and tried to make sense of it by
>>>> searching for lines containing "acl". According to this, everything
>>>> kind of works except the lines which state "remote operation failed".
>>>> Did GlusterFS fail to replicate or commit the ACL changes?
>>>>
>>>>>
>>>>>
>>>>> On 07/30/2015 06:22 PM, Jüri Palis wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Thanks Niels, your hints about those two options did the trick,
>>>>>> although I had to enable both of them, and I had to add nscd to this
>>>>>> mix as well (sssd provides the user identities).
>>>>>>
>>>>>> Now back to the problem with ACLs. Is your test setup something like
>>>>>> this: a GlusterFS 3.7.2 replicated volume on CentOS/RHEL 7, and a
>>>>>> client or clients accessing GlusterFS volumes over the NFS protocol,
>>>>>> correct?
>>>>>>
>>>>> As Jiffin had suggested, did you try the same command on a GlusterFS
>>>>> native mount?
>>>>>
>>>>> Log levels can be increased to TRACE/DEBUG mode using the command
>>>>> 'gluster vol set <volname> diagnostics.client-log-level [TRACE,DEBUG]'
>>>>>
>>>>> Also, please capture a packet trace on the server side using the
>>>>> command 'tcpdump -i any -s 0 -w /var/tmp/nfs-acl.pcap tcp and not
>>>>> port 22'
>>>>>
>>>>> Verify the packets sent by the Gluster-NFS process to the brick
>>>>> process to set the ACL.
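Put together, one possible capture session following these suggestions could look like this (volume name taken from this thread; the INFO reset at the end is an assumption about the normal log level):

```shell
# 1. Raise client log verbosity for the volume
gluster vol set acltest diagnostics.client-log-level TRACE

# 2. Start the packet capture on the server (backgrounded)
tcpdump -i any -s 0 -w /var/tmp/nfs-acl.pcap tcp and not port 22 &

# 3. ...reproduce the problem from the NFS client (setfacl/getfacl)...

# 4. Stop the capture and lower the log level again
kill %1
gluster vol set acltest diagnostics.client-log-level INFO
```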
>>>>>
>>>>> Thanks,
>>>>> Soumya
>>>>>
>>>>>> # gluster volume info acltest
>>>>>> Volume Name: acltest
>>>>>> Type: Replicate
>>>>>> Volume ID: 9e0de3f5-45ba-4612-a4f1-16bc5d1eb985
>>>>>> Status: Started
>>>>>> Number of Bricks: 1 x 2 = 2
>>>>>> Transport-type: tcp
>>>>>> Bricks:
>>>>>> Brick1: vfs-node-01:/data/gfs/acltest/brick0/brick
>>>>>> Brick2: vfs-node-02:/data/gfs/acltest/brick0/brick
>>>>>> Options Reconfigured:
>>>>>> server.manage-gids: on
>>>>>> nfs.server-aux-gids: on
>>>>>> performance.readdir-ahead: on
>>>>>> server.event-threads: 32
>>>>>> performance.cache-size: 2GB
>>>>>> storage.linux-aio: on
>>>>>> nfs.disable: off
>>>>>> performance.write-behind-window-size: 1GB
>>>>>> performance.nfs.io-cache: on
>>>>>> performance.nfs.write-behind-window-size: 250MB
>>>>>> performance.nfs.stat-prefetch: on
>>>>>> performance.nfs.read-ahead: on
>>>>>> performance.nfs.io-threads: on
>>>>>> cluster.readdir-optimize: on
>>>>>> network.remote-dio: on
>>>>>> auth.allow: 10.1.1.32,10.1.1.42
>>>>>> diagnostics.latency-measurement: on
>>>>>> diagnostics.count-fop-hits: on
>>>>>> nfs.rpc-auth-allow: 10.1.1.32,10.1.1.42
>>>>>> nfs.trusted-sync: on
>>>>>>
>>>>>> Maybe there is a way to increase the verbosity of the NFS server
>>>>>> which could help me trace this problem. I did not find any good
>>>>>> hints for increasing the verbosity of the NFS server in the
>>>>>> documentation.
>>>>>>
>>>>>> Regards,
>>>>>> J.
>>>>>>
>>>>>> On 30 Jul 2015, at 10:09, Jiffin Tony Thottan
>>>>>> <jthottan at redhat.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 29/07/15 20:14, Niels de Vos wrote:
>>>>>>>> On Wed, Jul 29, 2015 at 05:22:31PM +0300, Jüri Palis wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Another issue with NFS and sec=sys mode. As we all know, there
>>>>>>>>> is a limit of 15 security ids involved when running NFS in
>>>>>>>>> sec=sys mode. This limit makes effective and granular use of
>>>>>>>>> ACLs assigned through groups almost unusable. One way to
>>>>>>>>> overcome this limit is to use kerberised NFS, but GlusterFS
>>>>>>>>> does not natively support this access mode. Another option, at
>>>>>>>>> least according to one email thread, is that GlusterFS has an
>>>>>>>>> option, server.manage-gids, which should mitigate this limit
>>>>>>>>> and raise it to 90-something. Is this the option which can be
>>>>>>>>> used for increasing the sec=sys limit? Sadly the documentation
>>>>>>>>> does not have a clear description of this option, what exactly
>>>>>>>>> it does, and how it should be used.
>>>>>>>> server.manage-gids is an option to resolve the groups of a uid
>>>>>>>> in the brick process. You probably need to also use the
>>>>>>>> nfs.server-aux-gids option so that the NFS-server resolves the
>>>>>>>> gids of the uid accessing the NFS-server.
>>>>>>>>
>>>>>>>> The nfs.server-aux-gids option is used to overcome the
>>>>>>>> AUTH_SYS/AUTH_UNIX limit of (I thought 32?) groups.
>>>>>>>>
>>>>>>>> The server.manage-gids option is used to overcome the GlusterFS
>>>>>>>> protocol limit of ~93 groups.
>>>>>>>>
>>>>>>>> If your users do not belong to 90+ groups, you would not need to
>>>>>>>> set the server.manage-gids option, and nfs.server-aux-gids might
>>>>>>>> be sufficient.
>>>>>>>>
>>>>>>>> HTH,
>>>>>>>> Niels
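In command form, the combination Niels describes might be applied like this (volume name borrowed from this thread's example; the thread's later volume info shows both options enabled):

```shell
# Have the Gluster NFS server resolve the user's auxiliary groups
# itself, instead of trusting the (limited) AUTH_SYS list on the wire
gluster vol set acltest nfs.server-aux-gids on

# Only needed when users belong to ~90+ groups: resolve groups in the
# brick process to sidestep the GlusterFS protocol limit as well
gluster vol set acltest server.manage-gids on
```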
>>>>>>>>
>>>>>>>>> J.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 29 Jul 2015, at 16:16, Jiffin Tony Thottan
>>>>>>>>> <jthottan at redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 29/07/15 18:04, Jüri Palis wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> setfacl for dir on local filesystem:
>>>>>>>>>>>
>>>>>>>>>>> 1. set acl: setfacl -m g:x_meie_sec-test02:rx test
>>>>>>>>>>> 2. get acl
>>>>>>>>>>>
>>>>>>>>>>> # getfacl test
>>>>>>>>>>> user::rwx
>>>>>>>>>>> group::r-x
>>>>>>>>>>> group:x_meie_sec-test02:r-x
>>>>>>>>>>> mask::r-x
>>>>>>>>>>> other::r-x
>>>>>>>>>>>
>>>>>>>>>>> setfacl for dir on GlusterFS volume which is NFS mounted to
>>>>>>>>>>> client system:
>>>>>>>>>>>
>>>>>>>>>>> 1. the same command is used for setting the ACE; no error is
>>>>>>>>>>> returned by that command
>>>>>>>>>>> 2. get acl
>>>>>>>>>>>
>>>>>>>>>>> # getfacl test
>>>>>>>>>>> user::rwx
>>>>>>>>>>> group::r-x
>>>>>>>>>>> other::---
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> If I use an ordinary file as a target on GlusterFS like this:
>>>>>>>>>>>
>>>>>>>>>>> setfacl -m g:x_meie_sec-test02:rw dummy
>>>>>>>>>>>
>>>>>>>>>>> then the ACE entry is set for the file dummy stored on
>>>>>>>>>>> GlusterFS:
>>>>>>>>>>>
>>>>>>>>>>> # getfacl dummy
>>>>>>>>>>> user::rw-
>>>>>>>>>>> group::r--
>>>>>>>>>>> group:x_meie_sec-test02:rw-
>>>>>>>>>>> mask::rw-
>>>>>>>>>>> other::?
>>>>>>>>>>>
>>>>>>>>>>> So, as you can see, setting ACLs for files works but does
>>>>>>>>>>> not work for directories.
>>>>>>>>>>>
>>>>>>>>>>> This is all happening on CentOS 7, running GlusterFS 3.7.2
>>>>>>>>>> Hi Jyri,
>>>>>>>>>>
>>>>>>>>>> It seems there are a couple of issues:
>>>>>>>>>>
>>>>>>>>>> 1.) when you set a named group ACL for a file/directory, it
>>>>>>>>>> clears the permission of others too.
>>>>>>>>>> 2.) named group ACLs are not working properly for directories.
>>>>>>>>>>
>>>>>>>>>> I will try the same on my setup and share my findings.
>>>>>>>>>> --
>>>>>>>>>> Jiffin
>>>>>>>
>>>>>>> In my setup (glusterfs 3.7.2 and RHEL 7.1 client) it worked
>>>>>>> properly.
>>>>>>>
>>>>>>> I followed the same steps mentioned by you.
>>>>>>> #cd /mnt
>>>>>>> # mkdir dir
>>>>>>> # touch file
>>>>>>> # getfacl file
>>>>>>> # file: file
>>>>>>> # owner: root
>>>>>>> # group: root
>>>>>>> user::rw-
>>>>>>> group::r--
>>>>>>> other::r--
>>>>>>>
>>>>>>> # getfacl dir
>>>>>>> # file: dir
>>>>>>> # owner: root
>>>>>>> # group: root
>>>>>>> user::rwx
>>>>>>> group::r-x
>>>>>>> other::r-x
>>>>>>>
>>>>>>> # setfacl -m g:gluster:rw file
>>>>>>> # getfacl file
>>>>>>> # file: file
>>>>>>> # owner: root
>>>>>>> # group: root
>>>>>>> user::rw-
>>>>>>> group::r--
>>>>>>> group:gluster:rw-
>>>>>>> mask::rw-
>>>>>>> other::r--
>>>>>>>
>>>>>>> # setfacl -m g:gluster:r-x dir
>>>>>>> # getfacl dir
>>>>>>> # file: dir
>>>>>>> # owner: root
>>>>>>> # group: root
>>>>>>> user::rwx
>>>>>>> group::r-x
>>>>>>> group:gluster:r-x
>>>>>>> mask::r-x
>>>>>>> other::r-x
>>>>>>>
>>>>>>>
>>>>>>> So can you share the following information from the server:
>>>>>>> 1.) gluster vol info
>>>>>>> 2.) nfs.log (nfs-server log)
>>>>>>> 3.) brick logs
>>>>>>>
>>>>>>> and also can you try the same on a fuse mount (gluster native
>>>>>>> mount).
>>>>>>>
>>>>>>> --
>>>>>>> Jiffin
>>>>>>>
>>>>>>>>>>> J.
>>>>>>>>>>> On 29 Jul 2015, at 15:16, Jiffin Thottan
>>>>>>>>>>> <jthottan at redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>> From: "Jüri Palis" <jyri.palis at gmail.com>
>>>>>>>>>>>> To: gluster-users at gluster.org
>>>>>>>>>>>> Sent: Wednesday, July 29, 2015 4:19:20 PM
>>>>>>>>>>>> Subject: [Gluster-users] GlusterFS 3.7.2 and ACL
>>>>>>>>>>>>
>>>>>>>>>>>> Hi
>>>>>>>>>>>>
>>>>>>>>>>>> Setup:
>>>>>>>>>>>> GFS 3.7.2, NFS is used for host access
>>>>>>>>>>>>
>>>>>>>>>>>> Problem:
>>>>>>>>>>>> POSIX ACLs work correctly when applied to files but do not
>>>>>>>>>>>> work when applied to directories on GFS volumes.
>>>>>>>>>>>>
>>>>>>>>>>>> How can I debug this issue more deeply?
>>>>>>>>>>>>
>>>>>>>>>>>> Can you please explain the issue in more detail, i.e. what
>>>>>>>>>>>> exactly is not working properly? Is it setting the acl, or
>>>>>>>>>>>> some functionality issue, and on which client?
>>>>>>>>>>>> __
>>>>>>>>>>>> Jiffin
>>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Jyri
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Gluster-users mailing list
>>>>>>>>>>>> Gluster-users at gluster.org
>>>>>>>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>
>>>
>>>
>>>
>
>