Hello Group,

I thought that version 1.6b5 was supposed to have NFS export support. Is this true? If so, am I supposed to be able to export the lustre filesystem via NFS from a lustre client? I have tried this and have not been successful.

--
Jordan Schweller
Systems Developer/Engineer
* (E): jordan@osc.edu
* (V): 937.328.5708
* (F): 937.322.7869
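For context, the setup being attempted here is to mount a Lustre filesystem on an ordinary client and then re-export that mount point over NFS. A minimal sketch of that arrangement follows; the hostnames, MGS NID, and paths are hypothetical placeholders, not taken from the thread:

  # On the Lustre client, which doubles as the NFS server
  # (lustre 1.6-style mount; "mgsnode" and "testfs" are placeholders):
  mount -t lustre mgsnode@tcp:/testfs /mnt/testfs

  # /etc/exports entry re-exporting the Lustre mount over NFS:
  /mnt/testfs 192.168.0.0/24(rw,root_squash)

  # On a separate, NFS-only client:
  mount nfsserver:/mnt/testfs /mnt/testfs-nfs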
On Mon, 2006-10-23 at 13:06 -0400, Jordan Schweller wrote:
> Hello Group,
> I thought that version 1.6b5 was supposed to have NFS export support.
> Is this true? If so, am I supposed to be able to export the lustre
> filesystem via NFS from a lustre client? I have tried this and have not
> been successful.

Can you tell us a bit more about what you have done and what results you are experiencing?

b.
Sure -- here are the details:

OSS: I have 3 OSSes using DDN disks. Each OSS has 6 targets: three at around 100GB each and three at around 1TB each. These are divided logically between 2 different lustre filesystems -- a small one and a large one.

MDS: I have 1 MDS/MGS that contains 2 MDTs and 1 MGT. The targets are using LVM. The 2 MDTs are around 25GB and 400GB, respectively, for my small and large lustre filesystems. My MGT is around 10GB. Here are the important parts of df and mount:

jordan@sandeep:~ $ df -h | grep lustre
/dev/lustre_mds/lustre_mgs  9.9G  424M  9.0G   5% /mnt/lustre_mgs
/dev/lustre_mds/sfsmall      22G  445M   21G   3% /mnt/sfsmall01-mds
/dev/lustre_mds/sflarge     350G  471M  330G   1% /mnt/sflarge01-mds

jordan@sandeep:~ $ mount | grep lustre
/dev/lustre_mds/lustre_mgs on /mnt/lustre_mgs type lustre (rw)
/dev/lustre_mds/sfsmall on /mnt/sfsmall01-mds type lustre (rw)
/dev/lustre_mds/sflarge on /mnt/sflarge01-mds type lustre (rw)

LUSTRE CLIENT: The 2 lustre filesystems are mounted in /mnt/. Here are the df and mount command outputs:

jordan@pria:~ $ df -h | grep sf
<HOST_IP>@tcp:/sfsmall  845G  4.1G  798G   1% /mnt/sfsmall
<HOST_IP>@tcp:/sflarge  8.3T  4.2G  7.9T   1% /mnt/sflarge

jordan@pria:~ $ mount | grep lustre
<HOST_IP>@tcp:/sfsmall on /mnt/sfsmall type lustre (rw)
<HOST_IP>@tcp:/sflarge on /mnt/sflarge type lustre (rw)

Note: I have replaced the real IP address with <HOST_IP> for this posting only.

At this point, the lustre filesystems are working just fine. I can read and write to them from the client.

--------START NFS SECTION--------------------------------------

Now NFS exporting /mnt/sfsmall from the lustre client:

jordan@pria:~ $ more /etc/exports
# lustre exports
/mnt/sfsmall <HOST_IP>/24(rw,root_squash)

root@pria:~ $ /etc/init.d/nfs restart
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]

Now, mounting the lustre export via NFS on a separate client:

[root@sanjay ~]# mount pria:/mnt/sfsmall /mnt/lustre/
mount: pria:/mnt/sfsmall failed, reason given by server: Permission denied
[root@sanjay ~]#

This should work, but it isn't working. So as a workaround I export all of /mnt on the lustre client (the nfs server). This produces a different result:

root@pria:~ $ more /etc/exports
# lustre exports
#/mnt/sfsmall <HOST_IP>/24(rw,root_squash)
/mnt/ <HOST_IP>/24(rw,root_squash)

root@pria:~ $ /etc/init.d/nfs stop
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
root@pria:~ $ /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]

Now the nfs client CAN mount the filesystem:

[root@sanjay ~]# mount pria:/mnt /mnt/lustre/
[root@sanjay ~]# mount | grep nfs
pria:/mnt on /mnt/lustre type nfs (rw,addr=<HOST_IP>)

However, the filesystem size is wrong, and attempting to read or write to the filesystem hangs the nfs client:

[root@sanjay ~]# df -h | grep pria
pria:/mnt  32G  6.1G  24G  21% /mnt/lustre    ***THIS SHOULD BE 845GB.
[root@sanjay ~]# cd /mnt/lustre/
[root@sanjay lustre]# ls
^C

--------END NFS SECTION--------------------------------------

One more thing to note: I have also tried this with only 1 lustre filesystem mounted on the lustre client (pria), but that does not change the outcome on the nfs client.
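A few commands that can help when debugging a refused export like the one above -- a sketch assuming standard nfs-utils on both machines; these were not run in the thread:

  # On the NFS server (pria): list the active exports with their
  # effective per-client options.
  exportfs -v

  # Re-read /etc/exports without a full service restart.
  exportfs -ra

  # On the NFS client (sanjay): ask the server which exports it offers.
  showmount -e pria

  # The server's /var/log/messages usually records why mountd
  # refused a mount request.
  tail /var/log/messages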
Any help on this would be greatly appreciated. Thanks!

--
Jordan Schweller
Systems Developer/Engineer
* (E): jordan@osc.edu
* (V): 937.328.5708
* (F): 937.322.7869
On Mon, 2006-10-23 at 14:28 -0400, Jordan Schweller wrote:

The interesting bits...

> --------START NFS SECTION--------------------------------------
> Now NFS exporting /mnt/sfsmall from the lustre client:
>
> jordan@pria:~ $ more /etc/exports
> # lustre exports
> /mnt/sfsmall <HOST_IP>/24(rw,root_squash)
                          ^
Try adding an "fsid=1," in here. You can use any integer; 1 is just an example. All filesystems exported from an NFS server need to have a unique fsid.

> However, the filesystem size is wrong, and attempting to read or
> write to the filesystem hangs the nfs client:

Try the above, and mount just the exported lustre filesystem instead of its parent, and see if you still see these problems.

b.
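For background: the kernel NFS server normally derives an export's filehandle identifier from the underlying block device, and a Lustre client mount has no local block device, so mountd needs fsid= to supply one explicitly. Applying the suggestion would look roughly like this (a sketch using the same hosts and paths as above):

  # On pria, the suggested /etc/exports line:
  /mnt/sfsmall <HOST_IP>/24(fsid=1,rw,root_squash)

  # Re-read the exports table without a full NFS restart, then retry
  # the mount from the NFS client:
  root@pria:~ $ exportfs -ra
  [root@sanjay ~]# mount pria:/mnt/sfsmall /mnt/lustre/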
That fixed it...

root@pria:~ $ more /etc/exports
# lustre exports
/mnt/sfsmall <HOST_IP>/24(fsid=1,rw,root_squash)

[jordan@sanjay test]$ df -h | grep lustre
pria:/mnt/sfsmall  845G  4.1G  798G   1% /mnt/lustre
[jordan@sanjay test]$ touch file2
[jordan@sanjay test]$ ls -l
total 0
-rw-rw-r--  1 jordan jordan 0 Oct 23 14:42 file
-rw-rw-r--  1 jordan jordan 0 Oct 23 14:43 file2

Thanks Brian!!

Jordan

On Mon, 2006-10-23 at 14:33 -0400, Brian J. Murrell wrote:
> On Mon, 2006-10-23 at 14:28 -0400, Jordan Schweller wrote:
>
> The interesting bits...
>
> > --------START NFS SECTION--------------------------------------
> > Now NFS exporting /mnt/sfsmall from the lustre client:
> >
> > jordan@pria:~ $ more /etc/exports
> > # lustre exports
> > /mnt/sfsmall <HOST_IP>/24(rw,root_squash)
>                            ^
> Try adding an "fsid=1," in here. You can use any integer; 1 is just an
> example. All filesystems exported from an NFS server need to have a
> unique fsid.
>
> > However, the filesystem size is wrong, and attempting to read or
> > write to the filesystem hangs the nfs client:
>
> Try the above, and mount just the exported lustre filesystem instead
> of its parent, and see if you still see these problems.
>
> b.
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss@clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss

--
Jordan Schweller
Systems Developer/Engineer
* (E): jordan@osc.edu
* (V): 937.328.5708
* (F): 937.322.7869
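A closing sketch: since pria mounts two Lustre filesystems, exporting both would need a distinct fsid per export line. The sflarge line below is hypothetical -- only sfsmall was exported in the thread:

  # /etc/exports on pria -- hypothetical two-filesystem version;
  # fsid values only need to be unique integers on this server.
  /mnt/sfsmall <HOST_IP>/24(fsid=1,rw,root_squash)
  /mnt/sflarge <HOST_IP>/24(fsid=2,rw,root_squash)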