Hi,

General question: I know that a Lustre filesystem in general may have only one MGS running, and only one MDS for a specific Lustre filesystem (for example, /usrdisk and /admin1). Is there a way for a Lustre filesystem to have one MGS and one MDS which runs on another machine in the network?

I have an MGS/MDS combo running right now. I wish to add more disk space to our Lustre clients, in this case as scratch space (not backed up, and cleaned out regularly). I have no disks available in the current MGS/MDS on which to create an MDT for the new /scr1 disk. Can I point the MGS to use a disk physically on another computer as the MDT?

Lower-priority question: Is Lustre a good tool for such a user scratch disk system? I selected Lustre because it's fast and mounts everywhere in our group. If yes, may I combine, say, one scratch disk from computerA as an OST with fsname myscr and one scratch disk from computerB as an OST with fsname myscr, and have them both used by the myscr MDT? (The OSTs will not be on the same OSS.)

Enjoy your day,
megan
On Tue, 2008-09-23 at 16:16 -0400, Ms. Megan Larko wrote:
> Hi,

Hi,

> General question: I know that a Lustre filesystem may have only one
> MGS running in general and only one MDS for a specific Lustre
> filesystem (for example, /usrdisk and /admin1).

Correct.

> Is there a way in which a Lustre filesystem may have one MGS and one
> MDS which runs on another machine in the network?

Sure. There is no requirement that the MGS and MDT be on the same machine.

> I have an MGS/MDS combo running right now.

Ugh. You really should [have] separate[d] them. We didn't really intend combo MGS/MDTs for more than the most simple and/or "toy" deployments.

> I wish to add more disk space to our Lustre clients,

More space to an existing filesystem, or a new filesystem?

> I have no disks available in the current MGS/MDS on which to create
> an MDT for the new /scr1 disk. Can I point the MGS to use a disk
> physically on another computer as the MDT?

Sure.

> Lower priority question: Is lustre a good tool for such a user
> scratch disk system?

If the use case/needs of your scratch filesystem meet Lustre's abilities, sure. In general Lustre shines in cases of large files that need lots of I/O bandwidth. It's not so good at the "small file" scenario, though. It's not terrible, just not great.

b.
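As a rough sketch of the layout being confirmed here (the hostnames, device paths, mount points, and the tcp0 network below are placeholders, not the actual configuration; the options should be checked against the 1.6 manual), a standalone MGS, an MDT on a different machine, and myscr OSTs on two different OSSes would be formatted and mounted roughly like this:

    # On the dedicated MGS machine (a small disk is enough):
    mkfs.lustre --mgs /dev/sda1
    mount -t lustre /dev/sda1 /mnt/mgs

    # On the separate MDS machine, pointing at the MGS's NID:
    mkfs.lustre --fsname=myscr --mdt --mgsnode=mgs-host@tcp0 /dev/sdb1
    mount -t lustre /dev/sdb1 /mnt/myscr-mdt

    # On computerA (one OSS):
    mkfs.lustre --fsname=myscr --ost --mgsnode=mgs-host@tcp0 /dev/sdc1
    mount -t lustre /dev/sdc1 /mnt/myscr-ost0

    # On computerB (a second OSS):
    mkfs.lustre --fsname=myscr --ost --mgsnode=mgs-host@tcp0 /dev/sdd1
    mount -t lustre /dev/sdd1 /mnt/myscr-ost1

    # On the clients:
    mount -t lustre mgs-host@tcp0:/myscr /scr1

Once the MDT and both OSTs have registered with the MGS, the single client mount presents the combined space of both scratch disks under /scr1.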
Thank you for the quick reply, Brian. Comments and clarifications in-line.

On Tue, 2008-09-23 at 16:16 -0400, Ms. Megan Larko wrote:
>> General question: I know that a Lustre filesystem may have only one
>> MGS running in general and only one MDS for a specific Lustre
>> filesystem (for example, /usrdisk and /admin1).
>
> Correct.
>
>> Is there a way in which a Lustre filesystem may have one MGS and one
>> MDS which runs on another machine in the network?
>
> Sure. There is no requirement that the MGS and MDT be on the same
> machine.
>
>> I have an MGS/MDS combo running right now.
>
> Ugh. You really should [have] separate[d] them. We didn't really
> intend combo MGS/MDTs for more than the most simple and/or "toy"
> deployments.

Ok. I am building a new MDS that is right now successfully online using lustre-1.6.5.1. (Note: I had to use a different rpm installation order than the one recommended in the Lustre Manual, page 3-4 for ver. 1.6.4, BTW. I had to put the kernel-ib rpm second, after the kernel-lustre-smp rpm, then lustre-ldiskfs, then lustre-modules, and finally the lustre rpm.)

So I could grab some hardware (the requirements for an MGS-only box are low) and create an MGS-only server using lustre-1.6.5.1. Then I would have to migrate my existing systems to the new set-up. I have successfully done file-level Lustre backups and restores. Would this "upgrade" approach be better (and just put my users' scratch space on hold for a couple of weeks)?

----- small snip -----
> b.

megan
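Spelled out as commands (the package file names below are placeholders; the actual 1.6.5.1 file names depend on the kernel flavour and architecture), the install order described above would be roughly:

    rpm -ivh kernel-lustre-smp-<version>.rpm
    rpm -ivh kernel-ib-<version>.rpm
    rpm -ivh lustre-ldiskfs-<version>.rpm
    rpm -ivh lustre-modules-<version>.rpm
    rpm -ivh lustre-<version>.rpm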