Niels,
One additional piece of info.
When Tom mounted it with ACL on one client, it stopped allowing more than
32 groups on ALL the clients, even the ones where it was FUSE mounted without
the ACL option.
To me, this was the biggest issue: a single improper mount point re-imposing
the 32-group limitation on all FUSE clients.
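
If anyone wants to audit their clients for this, a rough check (my own sketch,
not an official procedure) is to look for the 'acl' option in fstab and in the
arguments of the running glusterfs client process:

    # run on each fuse client
    grep acl /etc/fstab
    ps ax | grep glusterf[s]
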
David (Sent from mobile)
==============================
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
David.Robinson at corvidtec.com
http://www.corvidtechnologies.com
> On Mar 5, 2015, at 4:18 PM, Niels de Vos <ndevos at redhat.com> wrote:
>
>> On Thu, Mar 05, 2015 at 03:31:51PM -0500, Tom Young wrote:
>> Update ?
>>
>> I found that we can enable ACLs on the gluster server, and still have
>> access to more than 32 groups. I had to remove the acl option from the
>> client that was mounting the gluster volume, and everything started working
>> the way we wanted. Thank you.
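>>
>> In case it helps others: the client's fstab entry is now essentially the one
>> shown further down in this thread, just with the 'acl' option dropped
>> (re-typed here, so double-check before copying):
>>
>> gfsib01a.corvidtec.com:/homegfs /homegfs glusterfs transport=tcp,_netdev 0 0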
>
> Great to hear, thanks for reporting the details!
>
> I was wondering about this too, but did not have the time to try it out
> yet. The 'acl' mount option causes a posix-acl xlator to be loaded on
> the client-side. This improves the performance for ACL handling, but
> that xlator is indeed limited to 32 groups. Dropping the 'acl' option
> from the mount command prevents loading the posix-acl xlator. This is
> not an issue, as long as FUSE still uses ACLs when the option is not
> passed (which I was not sure of).
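>
> To make that concrete, the difference is just the mount option; roughly
> (hand-typed, so treat these as a sketch rather than exact commands):
>
>   # loads the client-side posix-acl xlator; the 32-group limit applies
>   mount -t glusterfs -o acl,transport=tcp gfsib01a.corvidtec.com:/homegfs /homegfs
>
>   # no client-side posix-acl xlator; this is the setup that worked for you
>   mount -t glusterfs -o transport=tcp gfsib01a.corvidtec.com:/homegfs /homegfs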
>
> If you want, you can file a bug about this problem so we won't forget
> about it and can look into fixing it in the future.
>
> Thanks,
> Niels
>
>>
>> Tom Young
>>
>> *From:* Tom Young [mailto:tom.young at corvidtec.com]
>> *Sent:* Thursday, March 05, 2015 1:36 PM
>> *To:* 'gluster-users at gluster.org'
>> *Subject:* gluster and acl support
>>
>> Hello,
>>
>> I would like to use ACLs on my gluster volume, and also not be restricted
>> by the 32-group limitation if I do. I have noticed that if I enable acl
>> support on a client, then I am restricted to using 32 groups. I have
>> several users that are part of more than 32 groups, but they still want to
>> use ACLs on certain directories. The underlying filesystem is xfs, and I
>> have gotten ACLs to work, but then my users lose access to any group
>> they're a part of after 32.
>>
>> Has anyone encountered this, and more importantly, have you discovered a
>> way to make ACLs work with more than 32 groups?
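>>
>> For reference, this is roughly how I am testing it (usernames and paths are
>> made up for the example):
>>
>> id someuser                                   # member of well over 32 groups
>> setfacl -m g:somegroup:rwx /homegfs/some/dir  # grant access to a group via ACL
>> getfacl /homegfs/some/dir                     # confirm the ACL took effect
>>
>> With the acl option on the client mount, access through groups beyond the
>> first 32 is lost.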
>>
>> *Installed RPMs:*
>> gluster-nagios-common-0.1.1-0.el6.noarch
>> glusterfs-libs-3.6.2-1.el6.x86_64
>> glusterfs-geo-replication-3.6.2-1.el6.x86_64
>> glusterfs-devel-3.6.2-1.el6.x86_64
>> glusterfs-3.6.2-1.el6.x86_64
>> glusterfs-cli-3.6.2-1.el6.x86_64
>> glusterfs-rdma-3.6.2-1.el6.x86_64
>> glusterfs-fuse-3.6.2-1.el6.x86_64
>> glusterfs-server-3.6.2-1.el6.x86_64
>> glusterfs-debuginfo-3.6.2-1.el6.x86_64
>> glusterfs-extra-xlators-3.6.2-1.el6.x86_64
>> samba-vfs-glusterfs-4.1.11-2.el6.x86_64
>> glusterfs-api-3.6.2-1.el6.x86_64
>> glusterfs-api-devel-3.6.2-1.el6.x86_64
>>
>> */etc/fstab entry:*
>> gfsib01a.corvidtec.com:/homegfs /homegfs glusterfs transport=tcp,acl,_netdev 0 0
>>
>> *GFS Volume info:*
>> Volume Name: homegfs
>> Type: Distributed-Replicate
>> Volume ID: 1e32672a-f1b7-4b58-ba94-58c085e59071
>> Status: Started
>> Number of Bricks: 4 x 2 = 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfsib01a.corvidtec.com:/data/brick01a/homegfs
>> Brick2: gfsib01b.corvidtec.com:/data/brick01b/homegfs
>> Brick3: gfsib01a.corvidtec.com:/data/brick02a/homegfs
>> Brick4: gfsib01b.corvidtec.com:/data/brick02b/homegfs
>> Brick5: gfsib02a.corvidtec.com:/data/brick01a/homegfs
>> Brick6: gfsib02b.corvidtec.com:/data/brick01b/homegfs
>> Brick7: gfsib02a.corvidtec.com:/data/brick02a/homegfs
>> Brick8: gfsib02b.corvidtec.com:/data/brick02b/homegfs
>> Options Reconfigured:
>> server.manage-gids: on
>> changelog.rollover-time: 15
>> changelog.fsync-interval: 3
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: off
>> storage.owner-gid: 100
>> network.ping-timeout: 10
>> server.allow-insecure: on
>> performance.write-behind-window-size: 128MB
>> performance.cache-size: 128MB
>> performance.io-thread-count: 32
>>
>> Thank you
>>
>> Tom
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users