Hi there,

I have just started a new position with a company that has a nice 20-node cluster. The cluster is well set up, except that there is no shared storage. They did have NFS configured, but the read times they were getting when processing data slowed the cluster down considerably.

After looking around I found Lustre and thought I would configure it on a spare node and see what kind of results I can get from it.

The system is running Red Hat Enterprise Linux 4, and I have downloaded the pre-built RPMs from the following area on clusterfs.com:

http://www.clusterfs.com/downloads/public/Lustre/v1.6/Production/latest/rhel-2.6-i686/

I've installed the new kernel and modified grub.conf so it is the one being booted. I have also installed:

kernel-lustre-source
lustre-1.6.3-...
lustre-debuginfo
lustre-ldiskfs
lustre-modules
lustre-source

At the moment I have a second 70 GB SATA drive in the system that I would like to use as the Lustre drive, so following the instructions I ran:

mkfs.lustre --mdt --mgs /dev/sdb

This prints all the right messages and succeeds. I then mount the drive:

mount -t lustre /dev/sdb /test_share

It mounts successfully, and "cat /proc/fs/lustre/devices" shows all the right information.

The problem is that if I try to write to this filesystem I get the error "Not a directory". So I'm a bit lost. What am I doing wrong?

If I mount the filesystem with a standard command,

mount /dev/sdb /test_share

I can see the following files and directories:

[root at tester /]# ls /test_share
CONFIGS       last_rcvd  lost+found  OBJECTS  ROOT
health_check  LOGS       lov_objid   PENDING

Help.
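For anyone reproducing these steps, the roles a target was formatted with can be double-checked after the fact; a minimal sketch, assuming the device from the message above and the tunefs.lustre utility shipped in the same RPM set (its exact output varies by version, so treat it as illustrative):

# print the stored target parameters (fsname, MGS/MDT/OST flags) without changing anything
tunefs.lustre --dryrun /dev/sdb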
Check http://manual.lustre.org

When you mount the MDS/MGS with -t lustre it activates the server. You will also need one or more OSSs (mkfs.lustre --ost), and you will then need to mount those as well. Then on the client you mount much like NFS:

mount -t lustre mgshost:/lustre /test_share/

All the Lustre information is built on top of ext3 plus patches, but you cannot access it directly (unless you mount without -t lustre, which mounts it as ext3).

This is all spelled out in the manual.

Brock Palen
Center for Advanced Computing
brockp at umich.edu
(734) 936-1985

On Oct 25, 2007, at 9:50 AM, Iain Grant wrote:

> Now the thing is if I try and write to this filesystem I get an error
>
> Not a directory ????
>
> So I'm kinda lost. What am I doing wrong?
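A minimal sketch of the flow Brock describes, on a single throwaway test box; the filesystem name, device names, mount points and the MGS host (mgshost) below are placeholders, not values from this thread:

# MDS/MGS node: format and start the metadata/management target
mkfs.lustre --fsname=testfs --mdt --mgs /dev/sdb
mkdir -p /mnt/mdt
mount -t lustre /dev/sdb /mnt/mdt

# OSS node (can be the same machine for testing): format and start an OST
mkfs.lustre --fsname=testfs --ost --mgsnode=mgshost@tcp0 /dev/sdc
mkdir -p /mnt/ost0
mount -t lustre /dev/sdc /mnt/ost0

# client: this is the mount you actually read from and write to
mkdir -p /mnt/testfs
mount -t lustre mgshost@tcp0:/testfs /mnt/testfs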
Sorry folks, I'm still not getting any life from this. I've followed the manual and these steps.

Module options for networking were first set up in /etc/modprobe.conf, e.g.

# Networking options, see /sys/module/lnet/parameters
options lnet networks=tcp
# end Lustre modules

Making and starting a filesystem, combo MDT/MGS on my single node:

mkfs.lustre --fsname=testfs --mdt --mgs /dev/sda1
mkdir -p /mnt/test/mdt
mount -t lustre /dev/sda1 /mnt/test/mdt

cat /proc/fs/lustre/devices
0 UP mgs MGS MGS 5
1 UP mgc MGC143.234.96.46@tcp 303242f4-5aa3-5377-4895-90a397d56153 5
2 UP mdt MDS MDS_uuid 3
3 UP lov testfs-mdtlov testfs-mdtlov_UUID 4
4 UP mds testfs-MDT0000 testfs-MDT0000_UUID 3

Then I configured an OST on the same node:

mkfs.lustre --fsname=testfs --ost --mgsnode=143.234.96.46@tcp0 /dev/sda2
mkdir -p /mnt/test/ost0
mount -t lustre /dev/sda2 /mnt/test/ost0

When I cd to /mnt/test/ost0 I get "Not a directory" messages. If I do a df I can see the filesystems mounted.

When I look at /var/log/messages I can see:

Oct 26 09:33:40 fraggle kernel: Lustre: Filtering OBD driver; info@clusterfs.com
Oct 26 09:33:40 fraggle kernel: Lustre: testfs-OST0000: new disk, initializing
Oct 26 09:33:41 fraggle kernel: Lustre: OST testfs-OST0000 now serving dev (testfs-OST0000/01493618-db27-ba73-4d41-6ab062fa5355) with recovery enabled
Oct 26 09:33:41 fraggle kernel: Lustre: Server testfs-OST0000 on device /dev/sdb2 has started
Oct 26 09:33:43 fraggle kernel: Lustre: testfs-OST0000: received MDS connection from 0@lo
Oct 26 09:33:43 fraggle kernel: Lustre: MDS testfs-MDT0000: testfs-OST0000_UUID now active, resetting orphans

This is driving me nuts.
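As an aside on the log excerpt above: those messages already show the OST registering with the MDS over the loopback NID (0@lo), so the server side looks healthy at this point. For completeness, a couple of standard checks, assuming the lctl utility shipped in the lustre RPMs (the output mirrors /proc/fs/lustre/devices, so treat it as illustrative):

# list configured Lustre devices and their state
lctl dl

# show this node's LNET identifiers, useful when filling in --mgsnode=
lctl list_nids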
Sorry, please ignore me. I have now done

mount -t lustre 143.234.96.46@tcp0:/testfs /mnt/test

and can write to the /mnt/test area. I thought I would be able to write to the OST area; apologies.

Iain
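If the client mount should survive a reboot, the usual approach is an /etc/fstab entry; a sketch, assuming the same MGS NID and filesystem name used above (_netdev just delays the mount until networking is up):

# /etc/fstab
143.234.96.46@tcp0:/testfs  /mnt/test  lustre  defaults,_netdev  0 0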
No, you won't be able to write to either the OST or MDS directories, or even examine them ... they're just mount points that give you feedback about disk usage. I read in the docs that future releases may do something more useful with these mount points, but for now, that's all they do. :-)

cheers,
Klaus

On 10/26/07 1:47 AM, "Iain Grant" <Iain.Grant at scri.ac.uk> did etch on stone tablets:

> Sorry, please ignore me. I have now done
>
> mount -t lustre 143.234.96.46@tcp0:/testfs /mnt/test
>
> and can write to the /mnt/test area.
> I thought I would be able to write to the OST area; apologies.
>
> Iain
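A quick way to see the distinction Klaus describes, reusing the mount points from earlier in the thread (the test file name is made up and the output is illustrative only):

# the server mounts only report capacity and usage; they are not browsable
df -h /mnt/test/mdt /mnt/test/ost0

# all real file I/O goes through a client mount of the filesystem
mount -t lustre 143.234.96.46@tcp0:/testfs /mnt/test
dd if=/dev/zero of=/mnt/test/smoketest.bin bs=1M count=64
ls -l /mnt/test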