I have added the _netdev option to my fstab entries to prevent the OSTs and clients from coming online before the network is active. However, in my current configuration, they don't automatically mount at all.

I am testing with CentOS 5 and Lustre 1.6.2. Do I need to add my own start script to auto-mount the lustre filesystem types, or should another service be running that will trigger the _netdev fstab entries to mount?

Thanks!

--
Andrew Lundgren
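For reference, fstab entries of the kind being described would look roughly like this; the device path, MGS NID, filesystem name, and mount points below are hypothetical, not taken from the original message:

```shell
# /etc/fstab -- sketch of Lustre entries using the _netdev option.

# OST backing device on a server node (device path is illustrative):
/dev/sdb1                 /mnt/ost0    lustre  defaults,_netdev  0 0

# Client mount of a filesystem named "testfs" served by an MGS at 192.168.0.10:
192.168.0.10@tcp:/testfs  /mnt/testfs  lustre  defaults,_netdev  0 0
```

The `_netdev` option tells mount-at-boot machinery to defer these entries until networking is up, rather than mounting them with the local filesystems.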
There are references to running fsck on the Lustre OSTs after a crash or power failure. However, after downloading the ClusterFS e2fsprogs and building it, e2fsck does not recognize our ldiskfs-based OSTs. Is there a way to fsck the ldiskfs-based OSTs?

We are running 1.6.2 on CentOS 4.5 (2.6.9-55ELsmp, x86_64). We built Lustre from source.

Thanks,

Charlie Taylor
UF HPC Center
On Mon, 2007-01-10 at 15:57 -0600, Lundgren, Andrew wrote:
> I have added the _netdev option to my fstab entries to prevent the OSTs
> and clients from coming online before the network is active. However,
> in my current configuration, they don't automatically mount at all.
>
> I am testing with CentOS 5 and lustre 1.6.2.

Interesting. I think on CentOS this should still work. The /etc/init.d/netfs script is what (historically) is supposed to mount _netdev filesystems.

> Should I need to add my own start script to get these to auto mount
> the lustre types, or should another service be running that will
> trigger the netdev fstab entries to mount?

The netfs initscript is supposed to be doing that. I have discovered, however, that SLES 10 has ceased shipping that script. There is an open bug in bugzilla regarding this issue. To our knowledge, this situation only exists on SLES 10.

b.
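For context, the _netdev-relevant portion of a RHEL/CentOS-style netfs initscript boils down to something like the following. This is a sketch of the general mechanism, not the exact script shipped in CentOS 5:

```shell
# Approximation of what /etc/init.d/netfs does at boot, after the
# network service has started (filesystem-type list is illustrative).

# Mount the classic network filesystem types listed in /etc/fstab:
mount -a -t nfs,nfs4,cifs

# Mount every remaining fstab entry carrying the _netdev option,
# which is how Lustre targets and clients get picked up:
mount -a -O _netdev
```

If this script is not enabled in the active runlevel, _netdev entries never get mounted at boot even though the fstab entries are correct.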
On Oct 01, 2007 18:16 -0400, Charles Taylor wrote:
> There are references to running fsck on the lustre OSTs after a crash
> or power failure. However, after downloading the ClusterFS
> e2fsprogs and building it, e2fsck does not recognize our ldiskfs-based
> OSTs. Is there a way to fsck the ldiskfs-based OSTs?

You should use e2fsprogs-1.40.2-cfs1. That will handle ldiskfs OSTs.

Cheers, Andreas

--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
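With the CFS e2fsprogs build installed, checking an OST looks roughly like this. The device path and mount point below are hypothetical, and the target must be stopped first:

```shell
# Never fsck a mounted ldiskfs target -- unmount (stop) the OST first.
umount /mnt/ost0        # hypothetical OST mount point

# Dry run: -n opens the device read-only and only reports problems.
e2fsck -n /dev/sdb1

# Real check: -f forces a full pass even if the filesystem looks clean,
# -p ("preen") repairs problems that are safe to fix automatically.
e2fsck -fp /dev/sdb1
```

Make sure the e2fsck on PATH is actually the CFS build; a stock distribution e2fsck will not understand the ldiskfs on-disk features.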
Thank you! I will make sure it is present. If not, I will get it and add it.

> -----Original Message-----
> From: lustre-discuss-bounces at clusterfs.com
> [mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of Brian J. Murrell
> Sent: Monday, October 01, 2007 10:49 PM
> To: Lustre-discuss at clusterfs.com
> Subject: Re: [Lustre-discuss] _netdev not mounting..
>
> On Mon, 2007-01-10 at 15:57 -0600, Lundgren, Andrew wrote:
> > I have added the _netdev option to my fstab entries to prevent the OSTs
> > and clients from coming online before the network is active. However,
> > in my current configuration, they don't automatically mount at all.
> >
> > I am testing with CentOS 5 and lustre 1.6.2.
>
> Interesting. I think on centos this should still work.
> The /etc/init.d/netfs script is what (historically) is supposed to
> mount _netdev filesystems.
>
> > Should I need to add my own start script to get these to auto mount
> > the lustre types, or should another service be running that will
> > trigger the netdev fstab entries to mount?
>
> The netfs initscript is supposed to be doing that. I have discovered,
> however, that SLES 10 has ceased shipping that script.
> There is an open bug in bugzilla regarding this issue. To our
> knowledge, this situation only exists on SLES 10.
>
> b.
It is present in CentOS; it was just not being run. I added it to the run control scripts and it is functioning correctly.

> -----Original Message-----
> From: lustre-discuss-bounces at clusterfs.com
> [mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of Lundgren, Andrew
> Sent: Tuesday, October 02, 2007 9:16 AM
> To: Brian J. Murrell; Lustre-discuss at clusterfs.com
> Subject: Re: [Lustre-discuss] _netdev not mounting..
>
> Thank you! I will make sure it is present. If not, I will get it and add it.
>
> > -----Original Message-----
> > From: lustre-discuss-bounces at clusterfs.com
> > [mailto:lustre-discuss-bounces at clusterfs.com] On Behalf Of Brian J. Murrell
> > Sent: Monday, October 01, 2007 10:49 PM
> > To: Lustre-discuss at clusterfs.com
> > Subject: Re: [Lustre-discuss] _netdev not mounting..
> >
> > On Mon, 2007-01-10 at 15:57 -0600, Lundgren, Andrew wrote:
> > > I have added the _netdev option to my fstab entries to prevent the OSTs
> > > and clients from coming online before the network is active. However,
> > > in my current configuration, they don't automatically mount at all.
> > >
> > > I am testing with CentOS 5 and lustre 1.6.2.
> >
> > Interesting. I think on centos this should still work.
> > The /etc/init.d/netfs script is what (historically) is supposed to
> > mount _netdev filesystems.
> >
> > > Should I need to add my own start script to get these to auto mount
> > > the lustre types, or should another service be running that will
> > > trigger the netdev fstab entries to mount?
> >
> > The netfs initscript is supposed to be doing that. I have discovered,
> > however, that SLES 10 has ceased shipping that script.
> > There is an open bug in bugzilla regarding this issue. To our
> > knowledge, this situation only exists on SLES 10.
> >
> > b.
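Wiring netfs into the run control scripts on CentOS can be done with chkconfig, roughly as below; the runlevels shown are the usual multi-user ones and may need adjusting for a given setup:

```shell
# Register the netfs initscript with chkconfig (no-op if already registered)
# and enable it for the multi-user runlevels:
chkconfig --add netfs
chkconfig --level 345 netfs on

# Verify which runlevels it is now enabled in:
chkconfig --list netfs
```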