We have just installed Lustre 2.1.6 on SL6.4 systems. It is working well. However, I find that I am unable to apply root squash parameters.

We have separate mgs and mdt machines. Under Lustre 1.8.4 this was not an issue for root squash commands applied on the mdt. However, when I modify the lctl conf_param command syntax to what I think should now be appropriate, I run into difficulty.

[root@lmd02 tools]# lctl conf_param mdt.umt3-MDT0000.nosquash_nids="10.10.2.33@tcp"
No device found for name MGS: Invalid argument
This command must be run on the MGS.
error: conf_param: No such device

[root@mgs ~]# lctl conf_param mdt.umt3-MDT0000.nosquash_nids="10.10.2.33@tcp"
error: conf_param: Invalid argument

I have not yet looked at setting the "root_squash" value, as this problem has stopped me cold. So, two questions:

1. Is this even possible with our split mgs/mdt machines?
2. If possible, what have I done wrong above?

Thanks,
bob
Greetings-

Indiana University is looking for a couple of Lustre-savvy administrators to help manage and maintain the new Data Capacitor as part of the High Performance File Systems team. You would be working from the Cyberinfrastructure Building in Bloomington.

Jobs are posted here:

https://jobs.iu.edu/joblisting/index.cfm?jlnum=8796
https://jobs.iu.edu/joblisting/index.cfm?jlnum=8771

If you are interested, you need to apply using the IU Jobs Online Application. If you have any questions, please feel free to drop me a line or give me a call.

Thanks for your kind attention.

Sincerely,
Stephen Simms
Manager, High Performance File Systems
Indiana University
812-855-7211
From the Ops Manual (and hence not from direct experience): setting nosquash_nids on the MGS affects all MDTs -- it is a global setting when applied using conf_param on the MGS. Consequently, the command returns an error when you specify an individual MDT using conf_param on the MGS.

Instead, specify the file system that you want to apply the squash rule to:

lctl conf_param <fsname>.mdt.nosquash_nids="<nids>"

e.g.:

lctl conf_param umt3.mdt.nosquash_nids="10.10.2.33@tcp"

To set this per MDT, use mkfs.lustre or tunefs.lustre (refer to the Lustre Operations Manual, section 22.2).

Regards,

Malcolm.

> -----Original Message-----
> From: hpdd-discuss-bounces-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org On Behalf Of Bob Ball
> Sent: Saturday, July 20, 2013 12:35 AM
> To: hpdd-discuss-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org; Lustre discussion
> Subject: [HPDD-discuss] root squash problem
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
> https://lists.01.org/mailman/listinfo/hpdd-discuss
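Putting Malcolm's syntax together with the root_squash setting Bob had not yet tried, the full sequence on the MGS might look like the sketch below. The fsname (umt3) and NID are taken from the thread; the 99:99 uid:gid pair is only an illustrative placeholder, and the get_param check assumes the mdt.*.root_squash parameters are exposed on this Lustre version.

```shell
# On the MGS: conf_param keys squash settings by file system name, not MDT device.
# Map root to uid:gid 99:99 (placeholder value) on all MDTs of file system umt3.
lctl conf_param umt3.mdt.root_squash="99:99"

# Exempt the trusted client NID from squashing (value from the thread).
lctl conf_param umt3.mdt.nosquash_nids="10.10.2.33@tcp"

# On the MDS: verify that the settings propagated.
lctl get_param mdt.umt3-MDT0000.root_squash mdt.umt3-MDT0000.nosquash_nids
```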
Thanks, Malcolm. Worked like a charm. Your interpretation was clearly superior to mine.

bob

On 7/21/2013 6:37 PM, Cowe, Malcolm J wrote:
> From the Ops Manual (and hence not from direct experience), setting the nosquash_nids on the MGS will affect all MDTs -- it is a global setting when applied using conf_param on the MGS.
>
> Instead, specify the file system that one wants to apply the squash rule to:
>
> lctl conf_param <fsname>.mdt.nosquash_nids="<nids>"
>
> e.g.:
>
> lctl conf_param umt3.mdt.nosquash_nids="10.10.2.33@tcp"
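For the per-MDT route Malcolm points to (Operations Manual section 22.2), a hedged sketch using tunefs.lustre, run on the MDS with the target unmounted; /dev/mdt_device is a placeholder for the actual MDT block device:

```shell
# Write the squash parameters into the MDT's on-disk configuration;
# they are read the next time the target mounts.
tunefs.lustre --param="mdt.root_squash=99:99" /dev/mdt_device
tunefs.lustre --param="mdt.nosquash_nids=10.10.2.33@tcp" /dev/mdt_device
```

Unlike the conf_param route, which takes effect on the running file system, these changes only apply after the MDT is remounted.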