Olivier.Franki at smals.be
2013-Oct-10 10:58 UTC
[Gluster-users] Solaris NFS client and nfs.rpc
Hi,

We have run into a strange security issue when connecting a Solaris NFS
client to Gluster volumes.

Initially, we tried to share a volume between a Linux client (10.1.99.200)
and a Solaris client (10.1.99.201). We created this volume:
[root@llsmagfs001a glusterfs]# gluster volume info vol1
Volume Name: vol1
Type: Distribute
Volume ID: 4abcee08-6172-441a-851b-53becb77c281
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: llsmagfs001a.cloud.testsc.sc:/export/vol1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.brick-log-level: DEBUG
auth.allow: 10.1.99.200
nfs.rpc-auth-allow: 10.1.99.201
diagnostics.client-sys-log-level: WARNING
diagnostics.brick-sys-log-level: WARNING
The volume is exported only for the Solaris client (via
nfs.rpc-auth-allow):
[root@llsmagfs001a glusterfs]# showmount -e 10.1.99.202
Export list for 10.1.99.202:
/vol1 10.1.99.201
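(For reference, these access options were set with the standard gluster
CLI; the exact invocations below are a sketch reconstructed from the
volume info above:

gluster volume set vol1 auth.allow 10.1.99.200
gluster volume set vol1 nfs.rpc-auth-allow 10.1.99.201)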
If we try to mount this volume via NFS from the Linux client, we receive
an "access denied" error, as expected:
[root@llsmaofr001a mnt]# ifconfig eth0 | grep "inet addr"
inet addr:10.1.99.200 Bcast:10.1.99.255 Mask:255.255.254.0
[root@llsmaofr001a mnt]# mount -t nfs -o vers=3 10.1.99.202:/vol1 /mnt/vol1
mount.nfs: access denied by server while mounting 10.1.99.202:/vol1
But if we try to mount this volume from another Solaris client
(10.1.98.66), which is not in the allow list, we do not receive an access
denied error:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.1.98.66 netmask fffffe00 broadcast 10.1.99.255
ether 0:14:4f:5e:32:aa
# mount -o vers=3 nfs://10.1.99.202/vol1 /mnt
# mount | grep nfs
/mnt on nfs://10.1.99.202/vol1
remote/read/write/setuid/devices/vers=3/xattr/dev=594001d on Thu Oct 10
11:48:15 2013
# echo "test from solaris" > /mnt/test.solaris
# ls /mnt
test.solaris
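(On the Solaris side, the negotiated mount parameters can be
double-checked with the stock nfsstat tool, e.g.:

# nfsstat -m)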
Tested with:
- Solaris 10 and Solaris 11
- RHEL6
- GlusterFS 3.3.1-1, GlusterFS 3.4.0-2 and GlusterFS 3.4.1-2
Do we have to set another option to enforce RPC auth for Solaris clients?
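(For reference, the NFS auth options and their descriptions can be
listed on the server; a sketch, the grep pattern is only illustrative:

gluster volume set help | grep -A3 rpc-auth)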
Debug messages (when trying to mount the volume from the Linux client via
NFS):
[2013-10-10 10:14:07.578302] D [socket.c:463:__socket_rwv]
0-socket.nfs-server: would have passed zero length to read/write
[2013-10-10 10:14:07.579045] D [socket.c:486:__socket_rwv]
0-socket.nfs-server: EOF on socket
[2013-10-10 10:14:07.579078] D [socket.c:2236:socket_event_handler]
0-transport: disconnecting now
[2013-10-10 10:14:07.587459] D [socket.c:463:__socket_rwv]
0-socket.nfs-server: would have passed zero length to read/write
[2013-10-10 10:14:07.588021] D [socket.c:486:__socket_rwv]
0-socket.nfs-server: EOF on socket
[2013-10-10 10:14:07.588076] D [socket.c:2236:socket_event_handler]
0-transport: disconnecting now
[2013-10-10 10:14:07.589570] D [socket.c:463:__socket_rwv]
0-socket.nfs-server: would have passed zero length to read/write
[2013-10-10 10:14:07.590260] D [mount3.c:912:mnt3svc_mnt] 0-nfs-mount:
dirpath: /vol1
[2013-10-10 10:14:07.590293] D [mount3.c:855:mnt3_find_export]
0-nfs-mount: dirpath: /vol1
[2013-10-10 10:14:07.590309] D [mount3.c:749:mnt3_mntpath_to_export]
0-nfs-mount: Found export volume: vol1
[2013-10-10 10:14:07.590339] I [mount3.c:787:mnt3_check_client_net]
0-nfs-mount: Peer 10.1.99.200:860 not allowed
[2013-10-10 10:14:07.590353] D [mount3.c:934:mnt3svc_mnt] 0-nfs-mount:
Client mount not allowed
[2013-10-10 10:14:07.591104] D [socket.c:486:__socket_rwv]
0-socket.nfs-server: EOF on socket
[2013-10-10 10:14:07.591171] D [socket.c:2236:socket_event_handler]
0-transport: disconnecting now
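(These entries come from the Gluster NFS server log, which on our RHEL6
nodes should be /var/log/glusterfs/nfs.log; it can be followed live with,
e.g.:

tail -f /var/log/glusterfs/nfs.log)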
Debug messages (when trying to mount the volume from the Solaris client
via NFS):
[2013-10-10 10:17:15.444951] D
[nfs3-helpers.c:1641:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID: 5250f479,
LOOKUP: args: FH: exportid 00000000-0000-0000-0000-000000000000, gfid
00000000-0000-0000-0000-000000000000, name: vol1
[2013-10-10 10:17:15.446010] D [nfs3-helpers.c:3458:nfs3_log_newfh_res]
0-nfs-nfsv3: XID: 5250f479, LOOKUP: NFS: 0(Call completed successfully.),
POSIX: 117(Structure needs cleaning), FH: exportid
4abcee08-6172-441a-851b-53becb77c281, gfid
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.446539] D
[nfs3-helpers.c:1641:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID: 5250f478,
LOOKUP: args: FH: exportid 00000000-0000-0000-0000-000000000000, gfid
00000000-0000-0000-0000-000000000000, name: vol1
[2013-10-10 10:17:15.447234] D [nfs3-helpers.c:3458:nfs3_log_newfh_res]
0-nfs-nfsv3: XID: 5250f478, LOOKUP: NFS: 0(Call completed successfully.),
POSIX: 117(Structure needs cleaning), FH: exportid
4abcee08-6172-441a-851b-53becb77c281, gfid
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.448077] D [socket.c:486:__socket_rwv]
0-socket.nfs-server: EOF on socket
[2013-10-10 10:17:15.448133] D [socket.c:2236:socket_event_handler]
0-transport: disconnecting now
[2013-10-10 10:17:15.469271] D [nfs3-helpers.c:1627:nfs3_log_common_call]
0-nfs-nfsv3: XID: 5ed48474, FSINFO: args: FH: exportid
4abcee08-6172-441a-851b-53becb77c281, gfid
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.469601] D [nfs3-helpers.c:3389:nfs3_log_common_res]
0-nfs-nfsv3: XID: 5ed48474, FSINFO: NFS: 0(Call completed successfully.),
POSIX: 117(Structure needs cleaning)
[2013-10-10 10:17:15.470341] D [nfs3-helpers.c:1627:nfs3_log_common_call]
0-nfs-nfsv3: XID: 5ed48475, FSSTAT: args: FH: exportid
4abcee08-6172-441a-851b-53becb77c281, gfid
00000000-0000-0000-0000-000000000001
[2013-10-10 10:17:15.471159] D [nfs3-helpers.c:3389:nfs3_log_common_res]
0-nfs-nfsv3: XID: 5ed48475, FSSTAT: NFS: 0(Call completed successfully.),
POSIX: 117(Structure needs cleaning)
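(If it helps, we can also capture the traffic from the Solaris client
during the mount to see which RPC programs it actually calls; a sketch,
the interface name is assumed:

tcpdump -n -i eth0 host 10.1.98.66)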
Regards,
Olivier
Hello Olivier,

You should start the call with a better overview
(client-ip, client-os, server-ip, server-os, action, result, ...).
I hope I have understood what you mean:

99.200 linux | 99.202 linux gfs-3.3.1 | mount -o vers=3 nfs://$server/vol1 /mnt | error="access denied"
98.66 sol11  | 99.202 linux gfs-3.3.1 | mount -o vers=3 nfs://$server/vol1 /mnt | no "access denied" received

Do check that only "one" nfsd is running on the NFS server. [1]

> nfs.rpc-auth-allow: 10.1.99.201

All clients should be added here.

I had to use the following commands on the Solaris client:

nfsserver # gluster volume status | grep NFS
NFS Server on localhost         38467   Y   20092
NFS Server on 192.168.5.153     38467   Y   9281
## pick the port number
nfsclient # mount nfs://$nfsserver:38467/$vol /mnt/$vol

Note:
- dom0: run either the kernel nfsd or the glusterfs nfsd, not both at the
same time. [1][2]

[1] google: "mount kernel-nfsd glusterfs-nfsd at same time"
[2] google: "xvm.gluster.nfs.overview.txt"

Regards,
Heiko
Olivier.Franki at smals.be
2013-Oct-11 16:11 UTC
[Gluster-users] Solaris NFS client and nfs.rpc
Hello Heiko,

> You should start the call with a better overview
> (client-ip, client-os, server-ip, server-os, action, result, ...).

What we are trying to do is:

Client  OS         Server     Command                                        Expected       Result
99.200  RHEL6      gfs-3.4.1  mount -t nfs -o vers=3 10.1.99.202:/vol1 /mnt  Access denied  OK
99.201  Solaris10  gfs-3.4.1  mount -o vers=3 nfs://10.1.99.202/vol1 /mnt    Mount done     OK
98.66   Solaris10  gfs-3.4.1  mount -o vers=3 nfs://10.1.99.202/vol1 /mnt    Access denied  NOK, mount is done

Normally, only 99.201 should be able to mount via NFS (we mount the Linux
clients via glusterfs).

[root@llsmagfs001a glusterfs]# gluster volume info vol1
Volume Name: vol1
Type: Distribute
Volume ID: 4abcee08-6172-441a-851b-53becb77c281
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: llsmagfs001a.cloud.testsc.sc:/export/vol1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.brick-log-level: DEBUG
auth.allow: 10.1.99.200
nfs.rpc-auth-allow: 10.1.99.201
diagnostics.client-sys-log-level: WARNING
diagnostics.brick-sys-log-level: WARNING

> Do check that only "one" nfsd is running on the NFS server. [1]
>
> > nfs.rpc-auth-allow: 10.1.99.201
> All clients should be added here.
>
> I had to use the following commands on the Solaris client:
>
> nfsserver # gluster volume status | grep NFS
> NFS Server on localhost         38467   Y   20092
> NFS Server on 192.168.5.153     38467   Y   9281
> ## pick the port number
> nfsclient # mount nfs://$nfsserver:38467/$vol /mnt/$vol

We use Gluster 3.4.1, so the NFS server is started on the default port
2049, and there is no other NFS server running on the Gluster nodes:

[root@llsmagfs001a ~]# netstat -plantu | grep 2049
tcp   0   0 0.0.0.0:2049   0.0.0.0:*   LISTEN   15432/glusterfs
[root@llsmagfs001a ~]# gluster volume status | grep NFS
NFS Server on localhost                     2049   Y   15432
NFS Server on llsmagfs001d.cloud.testsc.sc  2049   Y   15257
NFS Server on llsmagfs001b.cloud.testsc.sc  2049   Y   15287
NFS Server on llsmagfs001c.cloud.testsc.sc  2049   Y   15379

Regards,
Olivier