Pranith Kumar Karampuri
2016-Sep-01  07:31 UTC
[Gluster-users] group write permissions not being respected
hi Pat,
       I think the other thing we should probably look at is a tcp dump
showing what uid/gid parameters are sent over the network when this
command is executed.
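
For concreteness, a minimal sketch of one way such a capture could be taken and
inspected; the interface name, server address, and capture path are placeholders,
and tshark is assumed to be available on whichever machine reads the capture:

    # on the NFS client, capture traffic to the server while re-running the failing command
    tcpdump -i <client-iface> -s 0 -w /tmp/nfs-perm.pcap host <nfs-server> and port 2049

    # then list the AUTH_UNIX credentials (uid/gid) carried on each RPC call;
    # 'tshark -V -r /tmp/nfs-perm.pcap' shows the full credential block if this field view is too terse
    tshark -r /tmp/nfs-perm.pcap -Y nfs -T fields -e frame.time -e rpc.auth.uid -e rpc.auth.gid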
On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley <phaley at mit.edu> wrote:
> --------------------------------------------------------------------------------------------
>
> hi Pat,
>       Are you seeing this issue only after migration or even before? Maybe
> we should look at the gid numbers on the disk and the ones that are
> coming from the client for the given user to see if they match or not?
>
> -------------------------------------------------------------------------------------------------
> This issue was not being seen before the migration.  We have copied the
> /etc/passwd and /etc/group files from the front-end machine (the client) to
> the data server, so they all match
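
A minimal sketch of how that match could be double-checked for one affected user
(the user name is a placeholder, the paths are the ones shown below, and the id
command would be run on both the client and the server so the results can be compared):

    # numeric uid, primary gid and supplementary groups as each host resolves them
    id <username>

    # ownership and mode as stored on a brick vs. as seen through the mount
    stat -c '%U(%u) %G(%g) %A' /mnt/brick1/projects/nsf_alpha
    stat -c '%U(%u) %G(%g) %A' /gdata/projects/nsf_alpha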
> -------------------------------------------------------------------------------------------------
>
> Could you give the stat output of the directory in question from both the
> brick and the NFS client?
>
> --------------------------------------------------------------------------------------------------
> From the server for gluster:
> [root at mseas-data2 ~]# stat /gdata/projects/nsf_alpha
>   File: `/gdata/projects/nsf_alpha'
>   Size: 4096          Blocks: 8          IO Block: 131072 directory
> Device: 13h/19d    Inode: 13094773206281819436  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the server for the first underlying brick
> [root at mseas-data2 ~]# stat /mnt/brick1/projects/nsf_alpha/
>   File: `/mnt/brick1/projects/nsf_alpha/'
>   Size: 4096          Blocks: 8          IO Block: 4096   directory
> Device: 800h/2048d    Inode: 185630      Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.669990907 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the server for the second underlying brick
> [root at mseas-data2 ~]# stat /mnt/brick2/projects/nsf_alpha/
>   File: `/mnt/brick2/projects/nsf_alpha/'
>   Size: 4096          Blocks: 8          IO Block: 4096   directory
> Device: 810h/2064d    Inode: 24085468    Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-03 14:01:52.000000000 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the client
> [root at mseas FixOwn]# stat /gdata/projects/nsf_alpha
>   File: `/gdata/projects/nsf_alpha'
>   Size: 4096          Blocks: 8          IO Block: 1048576 directory
> Device: 23h/35d    Inode: 13094773206281819436  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> ------------------------------------------------------------------------------------------------
>
> Could you also let us know the version of gluster you are using?
>
> -------------------------------------------------------------------------------------------------
>
>
> [root at mseas-data2 ~]# gluster --version
> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>
> [root at mseas-data2 ~]# gluster volume info
>
> Volume Name: data-volume
> Type: Distribute
> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: mseas-data2:/mnt/brick1
> Brick2: mseas-data2:/mnt/brick2
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: off
>
> [root at mseas-data2 ~]# gluster volume status
> Status of volume: data-volume
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick mseas-data2:/mnt/brick1               49154     0          Y       5005
> Brick mseas-data2:/mnt/brick2               49155     0          Y       5010
>
> Task Status of Volume data-volume
> ------------------------------------------------------------------------------
> Task                 : Rebalance
> ID                   : 892d9e3a-b38c-4971-b96a-8e4a496685ba
> Status               : completed
>
>
> [root at mseas-data2 ~]# gluster peer status
> Number of Peers: 0
>
>
> -------------------------------------------------------------------------------------------------
>
> On Thu, Sep 1, 2016 at 2:46 AM, Pat Haley <phaley at mit.edu> wrote:
>
>>
>> Hi,
>>
>> Another piece of data.  There are 2 distinct volumes on the file server
>>
>>    1. a straight nfs partition
>>    2. a gluster volume (served over nfs)
>>
>> The straight nfs partition does respect the group write permissions,
>> while the gluster volume does not.  Any suggestions on how to debug this or
>> what additional information would be helpful would be greatly appreciated
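
One minimal way to make that comparison concrete (a sketch: the first mount point
and the user name are placeholders, and sudo -u assumes root access on the client):

    # as a group member who does not own the directory, attempt the same write on both mounts
    sudo -u <groupmember> touch <straight-nfs-mount>/somedir/probe        # succeeds on the straight nfs partition
    sudo -u <groupmember> touch /gdata/projects/nsf_alpha/probe           # fails with 'Permission denied' on the gluster volume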
>>
>> Thanks
>>
>> On 08/30/2016 06:01 PM, Pat Haley wrote:
>>
>>
>> Hi
>>
>> We have just migrated our data to a new file server (more space, old
>> server was showing its age). We have a volume for collaborative use, based
>> on group membership.  In our new server, the group write permissions are
>> not being respected (e.g. the owner of a directory can still write to that
>> directory but any other member of the associated group cannot, even though
>> the directory clearly has group write permissions set).  This is occurring
>> regardless of how many groups the user is a member of (i.e. users that are
>> members of fewer than 16 groups are still affected).
>>
>> the relevant fstab line from the server looks like
>> localhost:/data-volume /gdata    glusterfs       defaults 0 0
>>
>> and for a client:
>> mseas-data2:/gdata       /gdata      nfs     defaults        0 0
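
A quick way to confirm what each side actually has mounted (a sketch; the grep
pattern is simply the mount point from the fstab lines above):

    # on the server: the gluster FUSE mount backing /gdata
    mount | grep /gdata

    # on the client: the NFS version and mount options that were actually negotiated
    nfsstat -m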
>>
>> Any help would be greatly appreciated.
>>
>> Thanks
>>
>>
>> --
>>
>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>> Pat Haley                          Email:  phaley at mit.edu
>> Center for Ocean Engineering       Phone:  (617) 253-6824
>> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
>> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>> 77 Massachusetts Avenue
>> Cambridge, MA  02139-4301
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
> --
> Pranith
>
> --
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley                          Email:  phaley at mit.edu
> Center for Ocean Engineering       Phone:  (617) 253-6824
> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA  02139-4301
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
-- 
Pranith
Pat Haley
2016-Sep-01  16:34 UTC
[Gluster-users] group write permissions not being respected
Hi Pranith,

Here is the output when I'm trying a touch command that fails with "Permission denied":

[root at compute-11-10 ~]# tcpdump -nnSs 0 host 10.1.1.4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
12:30:46.248293 IP 10.255.255.124.4215828946 > 10.1.1.4.2049: 208 getattr fh 0,0/22
12:30:46.252509 IP 10.1.1.4.2049 > 10.255.255.124.4215828946: reply ok 240 getattr NON 3 ids 0/3 sz 0
12:30:46.252596 IP 10.255.255.124.4232606162 > 10.1.1.4.2049: 300 getattr fh 0,0/22
12:30:46.253308 IP 10.1.1.4.2049 > 10.255.255.124.4232606162: reply ok 52 getattr ERROR: Permission denied
12:30:46.253358 IP 10.255.255.124.4249383378 > 10.1.1.4.2049: 216 getattr fh 0,0/22
12:30:46.260347 IP 10.1.1.4.2049 > 10.255.255.124.4249383378: reply ok 52 getattr ERROR: No such file or directory
12:30:46.300306 IP 10.255.255.124.931 > 10.1.1.4.2049: Flags [.], ack 1979284005, win 501, options [nop,nop,TS val 490628016 ecr 75449144], length 0
^C
7 packets captured
7 packets received by filter
0 packets dropped by kernel

On 09/01/2016 03:31 AM, Pranith Kumar Karampuri wrote:
> hi Pat,
>        I think the other thing we should probably look at is a tcp dump
> showing what uid/gid parameters are sent over the network when this
> command is executed.

--

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley at mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301