This may be in the documentation. If so, I missed it. If a site has
multiple Lustre file systems, the documentation implies that there only
needs to be a single MGS for an entire site (regardless of the number of
file systems). However, I also know it is fairly common to have a
combined MGS/MDT. So here are the questions.

1. If we are going to have several Lustre file systems, is there any
   reason not to create each one with its own combined MGS/MDT?

2. Can an MGS which is a combined MGS/MDT for one file system serve as
   the MGS for one or more other file systems where the MGS and MDT are
   separate?

3. Is there any reason why doing several separate combined MGS/MDTs
   (one for each file system) won't work?

What are other sites with multiple Lustre file systems doing? A
dedicated MGS seems like a costly option.

Thanks,

Charlie Taylor
UF HPC Center
On Nov 18, 2007 20:48 -0500, Charles Taylor wrote:
> If a site has multiple Lustre file systems, the documentation implies
> that there only needs to be a single MGS for an entire site
> (regardless of the number of file systems). However, I also know it
> is fairly common to have a combined MGS/MDT. So here are the questions.
>
> 1. If we are going to have several Lustre file systems, is there any
>    reason not to create each one with its own combined MGS/MDT?

Not that I'm aware of, with the restriction that you can only have a
single MGS running on a node at a time. If you have failover MDSes,
then they need to share an MGS.

> 2. Can an MGS which is a combined MGS/MDT for one file system serve
>    as the MGS for one or more other file systems where the MGS and
>    MDT are separate?

It's possible, but seems klunky... It means the combined MGS/MDT needs
to be started to also start the other filesystems.

> 3. Is there any reason why doing several separate combined MGS/MDTs
>    (one for each file system) won't work?

See above.

> What are other sites with multiple Lustre file systems doing? A
> dedicated MGS seems like a costly option.

A dedicated MGS doesn't mean a wholly separate node. It might just be
a small partition shared among a few failover nodes.

Cheers, Andreas
--
Andreas Dilger
Sr. Software Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
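For concreteness, here is a rough sketch of what the two layouts
discussed above might look like at format time. The device names,
fsnames, and the NID mds1@tcp0 are placeholders only, and the exact
mkfs.lustre options can vary by Lustre version:

    # Layout A: each file system gets its own combined MGS/MDT
    # (only one MGS may run per node, so fs1 and fs2 need different MDS nodes)
    mkfs.lustre --fsname=fs1 --mgs --mdt /dev/sda1
    mkfs.lustre --fsname=fs2 --mgs --mdt /dev/sdb1

    # Layout B: one dedicated MGS on a small partition, with separate MDTs
    # that point at it via --mgsnode
    mkfs.lustre --mgs /dev/sda1
    mkfs.lustre --fsname=fs1 --mdt --mgsnode=mds1@tcp0 /dev/sda2
    mkfs.lustre --fsname=fs2 --mdt --mgsnode=mds1@tcp0 /dev/sdb1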
> > 1. If we are going to have several Lustre file systems, is there any
> >    reason not to create each one with its own combined MGS/MDT?
>
> Not that I'm aware of, with the restriction that you can only have a
> single MGS running on a node at a time. If you have failover MDSes,
> then they need to share an MGS.

May a client mount different Lustre file systems from different MGSes?

The manual (sec 2.1.1) says "There should be one MGS per site, not one
MGS per file system."

I thought this meant a single MGS if a client wants to mount several
Lustre file systems.

/Jakob
Jakob Goldbach wrote:
> May a client mount different Lustre file systems from different MGSes?
>
> The manual (sec 2.1.1) says "There should be one MGS per site, not one
> MGS per file system."
>
> I thought this meant a single MGS if a client wants to mount several
> Lustre file systems.

It's not a requirement. The idea behind a single MGS per site is that
way, you can mount multiple different Lustre FS's without having to
keep track of which servers they are on -- all you need to remember is
the one mgsnid in "mount -t lustre mgsnid:/any_fs". The idea is one
centralized place for config data / starting clients / monitoring
(eventually).

There will be one MGC on the client for each different MGS, so there's
slightly more resource usage if you do multiple MGS's, but no big deal.
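A small sketch of what this looks like from the client side. The NIDs
(mgs01@tcp0, mds-a@tcp0, mds-b@tcp0), fsnames, and mount points are
placeholders, not anything from the thread:

    # Single site-wide MGS: every mount names the same MGS NID
    mount -t lustre mgs01@tcp0:/scratch /mnt/scratch
    mount -t lustre mgs01@tcp0:/home    /mnt/home

    # Separate combined MGS/MDTs: each mount names that file system's own MGS
    mount -t lustre mds-a@tcp0:/scratch /mnt/scratch
    mount -t lustre mds-b@tcp0:/home    /mnt/home

Either way works; with multiple MGSes the client simply sets up one MGC
per MGS it talks to, as noted above.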
Is it possible to update the MGS information on the fly after the file
system has been created?

I've recently added another network to my Lustre configuration, and I
can't mount it from this new network (which didn't exist when the file
system was created).

thanks,
Klaus

On 11/21/07 11:55 AM, "Nathan Rutman" <Nathan.Rutman at Sun.COM> did etch
on stone tablets:

> It's not a requirement. The idea behind a single MGS per site is that
> way, you can mount multiple different Lustre FS's without having to
> keep track of which servers they are on -- all you need to remember
> is the one mgsnid in "mount -t lustre mgsnid:/any_fs".
> [...]
Klaus Steden wrote:
> Is it possible to update the MGS information on the fly after the file
> system has been created?
>
> I've recently added another network to my Lustre configuration, and I
> can't mount it from this new network (which didn't exist when the file
> system was created).

No. You need to modify the modprobe.conf networks so that the nodes see
all their new NIDs, and then run tunefs.lustre --writeconf on the MDTs.
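As a sketch of that procedure, assuming the new network is an
InfiniBand net added alongside the existing TCP net; the interface
names (eth0, ib0) and the MDT device /dev/sda1 are placeholders, and
the details can vary by Lustre version:

    # /etc/modprobe.conf on the servers, so LNET brings up NIDs on both nets:
    options lnet networks=tcp0(eth0),o2ib(ib0)

    # With the file system stopped and the target unmounted, regenerate the
    # configuration logs (run against the MDT device; check the manual's
    # writeconf procedure for whether your OSTs need it as well):
    tunefs.lustre --writeconf /dev/sda1

Clients on the new network can then mount using the MGS NID on that
network.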
On 11/21/07 12:05 PM, "Nathan Rutman" <Nathan.Rutman at Sun.COM> did etch
on stone tablets:

> No. You need to modify the modprobe.conf networks so that the nodes
> see all their new NIDs, and then run tunefs.lustre --writeconf on the
> MDTs.

Aha, light goes on. Is this explained more fully in the Lustre docs? I
would like to update my file system without damaging it. (I know, I
know, I'm demanding like that ... ;-)

Klaus