Hi,

I am trying to install Lustre on a single node acting as all three roles
(MDS, OST, and client), building from the Lustre source code. However, when
I try to start Lustre with

$ lconf --reformat local.xml

I get the following error:

.........
.........
MDSDEV: mds-test mds-test_UUID /tmp/mds-test ldiskfs 50000 yes
MDS mount options: errors=remount-ro
LustreError: 15565:0:(lvfs.h:130:ll_lookup_one_len()) bad inode returned 14/529619894
LustreError: 15565:0:(mds_fs.c:472:mds_fs_setup()) cannot create LOGS directory: rc = -2
LustreError: 15565:0:(handler.c:1878:mds_setup()) mds-test: MDS filesystem method init failed: rc=-2
LustreError: 15567:0:(obd_config.c:288:class_cleanup()) Device 4 not setup
MDS failed to start. Check the syslog for details. (May need to run lconf --write-conf)

I am using the default configuration and setup as given in the Lustre manual.

Could someone point out what the problem is?

Thanks,
RR
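For context, the local.xml that lconf consumes in Lustre 1.4 is normally
generated with lmc. A minimal single-node sketch matching the device names
that appear in the logs in this thread (the exact flags and the OST sizes
here are assumptions from the 1.4-era documentation, not a verified recipe):

    # Generate a single-node config: one MDS, two OSTs, one client mount.
    lmc -o local.xml --add node --node localhost
    lmc -m local.xml --add net --node localhost --nid localhost --nettype tcp
    lmc -m local.xml --add mds --node localhost --mds mds-test \
        --fstype ldiskfs --dev /tmp/mds-test --size 50000
    lmc -m local.xml --add lov --lov lov-test --mds mds-test \
        --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
    lmc -m local.xml --add ost --node localhost --lov lov-test --ost ost1-test \
        --fstype ldiskfs --dev /tmp/ost1-test --size 100000
    lmc -m local.xml --add ost --node localhost --lov lov-test --ost ost2-test \
        --fstype ldiskfs --dev /tmp/ost2-test --size 100000
    lmc -m local.xml --add mnt --node localhost --path /mnt/lustre \
        --mds mds-test --lov lov-test

If the stock configuration from the manual is in use, the XML itself is an
unlikely culprit; the errors above point at the MDS backing filesystem.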
Hi,

I am stuck at the same point too! Here is my log.

Nov 6 10:17:43 cluster3 kernel: Lustre: 3942:0:(module.c:382:init_libcfs_module()) maximum lustre stack 8192
Nov 6 10:17:47 cluster3 kernel: Lustre: Added LNI 192.168.1.103@tcp [8/256]
Nov 6 10:17:47 cluster3 kernel: Lustre: Accept secure, port 988
Nov 6 10:18:01 cluster3 kernel: Lustre: OBD class driver Build Version: 1.4.7-19691231190000-PRISTINE-.usr.src.linux-2.6.12.6, info@clusterfs.com
Nov 6 10:18:01 cluster3 kernel: Lustre: Filtering OBD driver; info@clusterfs.com
Nov 6 10:18:02 cluster3 kernel: Lustre: Lustre Lite Client File System; info@clusterfs.com
Nov 6 10:18:02 cluster3 kernel: kjournald starting. Commit interval 5 seconds
Nov 6 10:18:02 cluster3 kernel: LDISKFS FS on loop0, internal journal
Nov 6 10:18:02 cluster3 kernel: LDISKFS-fs: mounted filesystem with ordered data mode.
Nov 6 10:18:02 cluster3 kernel: Lustre: 4186:0:(filter.c:574:filter_init_server_data()) ost1-test: initializing new last_rcvd
Nov 6 10:18:02 cluster3 kernel: Lustre: OST ost1-test now serving /dev/loop0 (ee8dee13-12db-4e06-8009-a5c7192d5956) with recovery enabled
Nov 6 10:18:03 cluster3 kernel: kjournald starting. Commit interval 5 seconds
Nov 6 10:18:03 cluster3 kernel: LDISKFS FS on loop1, internal journal
Nov 6 10:18:03 cluster3 kernel: LDISKFS-fs: mounted filesystem with ordered data mode.
Nov 6 10:18:03 cluster3 kernel: Lustre: 4383:0:(filter.c:574:filter_init_server_data()) ost2-test: initializing new last_rcvd
Nov 6 10:18:03 cluster3 kernel: Lustre: OST ost2-test now serving /dev/loop1 (c1ae26ee-23ca-4a56-8475-10e9373d9875) with recovery enabled
Nov 6 10:18:04 cluster3 kernel: kjournald starting. Commit interval 5 seconds
Nov 6 10:18:04 cluster3 kernel: LDISKFS FS on loop2, internal journal
Nov 6 10:18:04 cluster3 kernel: LDISKFS-fs: mounted filesystem with ordered data mode.
Nov 6 10:18:04 cluster3 kernel: Lustre: 4417:0:(mds_fs.c:239:mds_init_server_data()) mds-test: initializing new last_rcvd
Nov 6 10:18:04 cluster3 kernel: Lustre: MDT mds-test now serving /dev/loop2 (a110e01c-14b9-4cd6-b61a-3c39f231785f) with recovery enabled
Nov 6 10:18:04 cluster3 kernel: Lustre: MDT mds-test has stopped.
Nov 6 10:18:05 cluster3 kernel: kjournald starting. Commit interval 5 seconds
Nov 6 10:18:05 cluster3 kernel: LDISKFS FS on loop2, internal journal
Nov 6 10:18:05 cluster3 kernel: LDISKFS-fs: mounted filesystem with ordered data mode.
Nov 6 10:18:05 cluster3 kernel: LustreError: 4586:0:(lvfs.h:130:ll_lookup_one_len()) bad inode returned 14032/1995494705
Nov 6 10:18:05 cluster3 kernel: LustreError: 4586:0:(mds_fs.c:472:mds_fs_setup()) cannot create LOGS directory: rc = -2
Nov 6 10:18:05 cluster3 kernel: LustreError: 4586:0:(handler.c:1878:mds_setup()) mds-test: MDS filesystem method init failed: rc = -2
Nov 6 10:18:05 cluster3 kernel: LustreError: 4588:0:(obd_config.c:288:class_cleanup()) Device 4 not setup

I added some extra logging, and it appears that Lustre fails while trying to
enter the ROOT directory of what is probably the loop-mounted MDS; that is
when it reports the bad inode. But I am unable to figure out why that
happens. Any idea what could be wrong?

A similar issue was reported before:
http://www.archivesat.com/ClusterFS/thread445676.htm

Thanks in advance.

regards,
Sumit

On 11/3/06, Ronald Rivest <rrivest@gmail.com> wrote:
> [original message quoted above; trimmed]
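Sumit's hypothesis can be checked by hand: with Lustre stopped, mount the
MDS backing file as ldiskfs and look at the top-level directories directly.
A minimal sketch, assuming the image lives at /tmp/mds-test and /dev/loop3
is free (both are assumptions; adjust to your setup):

    # Attach the image to a spare loop device, then run a read-only
    # consistency check (ldiskfs is ext3-derived, so e2fsck understands it).
    losetup /dev/loop3 /tmp/mds-test
    e2fsck -fn /dev/loop3

    # Mount it as ldiskfs (needs the ldiskfs module, which these logs show
    # is already loaded) and inspect the filesystem root: ROOT and LOGS are
    # top-level directories created by mds_fs_setup, so a "bad inode" for
    # ROOT should show up here as a damaged or missing entry.
    mkdir -p /mnt/mdsdebug
    mount -t ldiskfs /dev/loop3 /mnt/mdsdebug
    ls -lai /mnt/mdsdebug

    umount /mnt/mdsdebug
    losetup -d /dev/loop3

Note that rc = -2 is -ENOENT, consistent with the lookup of ROOT (or of
LOGS inside it) coming back with a bad or missing inode.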
What version of Lustre are you all using?

Sumit Narayan wrote:
> [log and analysis quoted above; trimmed]
I hit this with both 1.4.7 and 1.4.7.1, on kernel 2.6.12.6 from kernel.org
with the 2.6.12-vanilla patch series applied.

On 11/6/06, Nathaniel Rutman <nathan@clusterfs.com> wrote:
> [quoted message trimmed]
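The kernel patching step itself is easy to get subtly wrong. The 1.4-era
documentation describes applying the series with quilt along these lines
(the paths and the series file name here are assumptions for source trees
under /usr/src; check the names shipped in your Lustre tarball):

    cd /usr/src/linux-2.6.12.6
    # Point quilt at the series and patches shipped in the Lustre source.
    ln -s /usr/src/lustre-1.4.7.1/lustre/kernel_patches/series/2.6.12-vanilla.series series
    ln -s /usr/src/lustre-1.4.7.1/lustre/kernel_patches/patches patches
    quilt push -av    # should apply every patch in the series cleanly
    quilt applied     # sanity check: list what actually got applied

If quilt reports fuzz or rejects on any patch, the resulting kernel can
still build and boot yet misbehave in exactly this kind of way.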
I got this on 1.4.7.1. Apparently it does not happen when installing from
the RPMs downloaded from the Lustre website. Could this be a kernel
configuration mistake?

On 11/6/06, Sumit Narayan <narsumit@gmail.com> wrote:
> [quoted message trimmed]
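If a kernel configuration difference is the suspect, comparing the
hand-built .config against the config of the prebuilt ClusterFS kernel is a
quick test. One guess worth checking: ldiskfs is built from the kernel's
ext3 sources, so an ext3 feature disabled in .config (extended attribute
support in particular, which the MDS relies on) may end up disabled in
ldiskfs too. A sketch, with an illustrative rather than exhaustive option
list and a hypothetical config path for the RPM kernel:

    # Options the 1.4-era server kernels are commonly built with.
    grep -E 'CONFIG_EXT3_FS(_XATTR)?=|CONFIG_FS_POSIX_ACL|CONFIG_QUOTA' \
        /usr/src/linux-2.6.12.6/.config

    # If the ClusterFS RPM kernel is installed, diff against its config
    # (the /boot/config-* file name below is a placeholder).
    diff <(sort /usr/src/linux-2.6.12.6/.config) \
         <(sort /boot/config-2.6.9-lustre-1.4.7.1) | less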