Nathaniel Rutman
2006-Apr-21 12:37 UTC
[Lustre-devel] Re: [Lustre-announce] Lustre 1.6.0 beta2 is now available
Peter Bojanic wrote:
>>> Partners, customers, and evaluators can download this release from:
>>> http://www.clusterfs.com/download-beta.html
>>>
>>> Please see the Lustre wiki for more information about this release:
>>> https://mail.clusterfs.com/wikis/lustre/MountConf
>>
>> Peter,
>> Is there a specific branch name or tag I can use to pull 1.6 sources
>> from cvs? Or should I use the tarball from the wiki page,
>> lustre1.6b0.tgz?
>
> I don't think 1.6.0 is in the public wiki yet. Nathan can advise/
> clarify. If not, we should do something about this soon. Nathan...
> please follow up with sysad to make this happen.

You can download it from either of the above two URLs. Tim Cullen
downloaded the tarfile from the wiki page and has compiled it; I don't
know if he's played with it yet. You can also pull b1_4_mountconf, but
we're likely going to be changing branch names soon, so this might get
stale. Eventually, it will all go into b1_6.

>> I would like to start experimenting with 1.6 & mountconf soon. I like
>> the info available at https://mail.clusterfs.com/wikis/lustre/MountConf:
>> short, sweet, lots of examples, just enough to get started with.
>
> Yes, we started this as an internal resource that has evolved over
> time. We were sure to publish it for the public once we made the beta
> available. Please direct feedback to the lustre-devel list, so we can
> continue to improve it.
>
>> One thing that struck me about mountconf: while it does make creating
>> & managing a Lustre fs look a lot more like other filesystems, it
>> loses the one virtue that lconf had. That virtue was a single, common,
>> uniform file & command to do server operations like start & stop;
>> nothing needed to be especially customized per server. This property
>> was quite useful on XT3 service nodes, where there is a common,
>> read-only, shared root fs across all servers.
>>
>> Looks like with mountconf there need to be customized scripts or sets
>> of cmds to start up & shut down services on each server, with none the
>> same as others.
>
> This is an interesting observation, and one that I've heard from some
> other customers as well. Perhaps Nathan can offer some insights on how
> to handle some of the use cases you mentioned. Probably best to drag
> this conversation out into the lustre-devel foreground, so other
> Lustre users can learn and benefit from the answers.

In fact, we are working on scripts to manage full-cluster installations
(under utils/cluster_scripts). They are not up to snuff yet. These are
currently designed to format an entire cluster, not manage it, but it
would be easy to add a "start/stop the entire cluster" script. The
scripts are intended to manage HA configuration as well. Not there yet
is a "collect cluster info" script (bz 9863) that will consolidate info
about the entire cluster config into a single file.

Alternately, the commands to start up services on each server could
simply be entries in the local /etc/fstab files. Since disks are now
labelled with their services, fstab entries can become very clear and
robust:

    LABEL=lustre-MDT0000  /mnt/mdt  lustre  defaults  0 0

will start that service as long as the disk is there. You don't have to
worry about which /dev/ it maps to, SCSI reordering issues, or getting
confused about which disks are on which nodes or which services are on
which disks.

If you really want to, you could in fact use this feature to write a
common startup script for all your server nodes that just tries to
mount every label everywhere. (Of course, don't do this with shared
failover disks.)

In any case, please do play around with this and let us know your
thoughts on what works well and what doesn't.

Nathan
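Nathan's closing suggestion (a common startup script that simply tries to mount every Lustre-labelled disk it finds) might be sketched roughly as below. This is only an illustration, not part of Lustre: the `list_lustre_labels` helper, the use of `blkid` to discover labels, and the mount-point naming are all assumptions.

```shell
#!/bin/sh
# Sketch of a common startup script shared by all server nodes:
# mount every Lustre-labelled disk present on this node.
# (Per the caveat above, do NOT use this with shared failover disks.)

list_lustre_labels() {
    # Extract lustre-* labels from blkid-style output on stdin,
    # e.g. '/dev/sda1: LABEL="lustre-MDT0000" TYPE="lustre"'.
    grep -o 'LABEL="lustre-[^"]*"' | sed 's/^LABEL="//; s/"$//'
}

blkid | list_lustre_labels | while read -r label; do
    # Derive a mount point from the label (naming is illustrative).
    mountpoint="/mnt/${label#lustre-}"
    mkdir -p "$mountpoint"
    # Mounting the device starts the service it was formatted for.
    mount -t lustre "LABEL=$label" "$mountpoint"
done
```

Because services are found by label rather than by /dev/ path, the same script can be installed unmodified on every server node, which is exactly the shared-root use case raised above.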
Andreas Dilger
2006-Apr-23 01:36 UTC
[Lustre-devel] Re: [Lustre-announce] Lustre 1.6.0 beta2 is now available
On Apr 21, 2006  11:37 -0700, Nathaniel Rutman wrote:
> Bob Glossman? wrote:
> >> Looks like with mountconf there need to be customized scripts or sets
> >> of cmds to start up & shut down services on each server, with none
> >> the same as others.
>
> In fact, we are working on scripts to manage full-cluster installations
> (under utils/cluster_scripts). They are not up to snuff yet. These are
> currently designed to format an entire cluster, not manage it, but it
> would be easy to add a "start/stop the entire cluster" script. The
> scripts are intended to manage HA configuration as well. Not there yet
> is a "collect cluster info" script (bz 9863) that will consolidate info
> about the entire cluster config into a single file.
>
> Alternately, the commands to start up services on each server could
> simply be entries in the local /etc/fstab files. Since disks are now
> labelled with their services, fstab entries can become very clear and
> robust:
>     LABEL=lustre-MDT0000  /mnt/mdt  lustre  defaults  0 0
> will start that service as long as the disk is there. You don't have to
> worry about which /dev/ it maps to, SCSI reordering issues, or getting
> confused about which disks are on which nodes or which services are on
> which disks.

Something like "pdsh -w {server nodes} mount -t lustre" would work, if
/etc/fstab was set up to have only the primary service filesystems on
each node. They could also have the "noauto" mount option, so they
aren't started on every boot.

Alternately, if you are using HA software to control server failover,
mounting a filesystem is a pretty standard feature of these systems and
they should be left to their own devices.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
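A minimal sketch of the fstab side of Andreas's suggestion; the mount point and the particular services listed are illustrative:

```shell
# /etc/fstab on one server node: only the services this node
# primarily owns, marked "noauto" so they are not started at boot.
LABEL=lustre-MDT0000  /mnt/mdt  lustre  noauto  0 0
```

With entries like this in place, an explicit `mount /mnt/mdt` (run locally, or across the server nodes via pdsh) starts the service: "noauto" entries are skipped by the boot-time `mount -a`, but are still resolved from fstab when the mount point is named explicitly.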
Any idea when there will be a stable release for this?

--
Daniel Shearer

On Tue, 18 Apr 2006, Peter Bojanic wrote:
> NOTE: NOT FOR PRODUCTION USE
>
> Cluster File Systems is pleased to announce an early beta version of
> Lustre 1.6, featuring MountConf. This new configuration and
> management system, which debuts in Lustre 1.6, extends the zeroconf
> concept from clients to servers, so that mounting a server starts it
> and unmounting stops it. Server configuration is specified when you
> format the storage, so lconf and lmc have been retired, and new
> storage can be added dynamically to a live cluster -- just format a
> new device and mount it!
>
> Partners, customers, and evaluators can download this release from:
> http://www.clusterfs.com/download-beta.html
>
> Please see the Lustre wiki for more information about this release:
> https://mail.clusterfs.com/wikis/lustre/MountConf
>
> Thank you for your assistance; as always, you can report issues via
> Bugzilla (https://bugzilla.clusterfs.com/) or email
> (support@clusterfs.com).
>
> -- The Lustre Team --
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss@clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
Hi Daniel,

On 2006-04-21, at 12:02, Daniel Shearer wrote:
> Any idea when there will be a stable release for this?

I think you'll see at least one more beta from CFS before a production
release. Our plan is for this to be production-ready by this June, but
it will depend a lot on the range of testing we've been able to
accomplish in that timeframe.

Any help with testing, and feedback reported back to here or
lustre-devel, would be greatly appreciated.

Cheers,
Peter

> --
> Daniel Shearer
>
> On Tue, 18 Apr 2006, Peter Bojanic wrote:
>
>> NOTE: NOT FOR PRODUCTION USE
>>
>> Cluster File Systems is pleased to announce an early beta version of
>> Lustre 1.6, featuring MountConf. This new configuration and
>> management system, which debuts in Lustre 1.6, extends the zeroconf
>> concept from clients to servers, so that mounting a server starts it
>> and unmounting stops it. Server configuration is specified when you
>> format the storage, so lconf and lmc have been retired, and new
>> storage can be added dynamically to a live cluster -- just format a
>> new device and mount it!
>>
>> Partners, customers, and evaluators can download this release from:
>> http://www.clusterfs.com/download-beta.html
>>
>> Please see the Lustre wiki for more information about this release:
>> https://mail.clusterfs.com/wikis/lustre/MountConf
>>
>> Thank you for your assistance; as always, you can report issues via
>> Bugzilla (https://bugzilla.clusterfs.com/) or email
>> (support@clusterfs.com).
>>
>> -- The Lustre Team --
Roland Fehrenbacher
2006-May-19 07:36 UTC
[Lustre-discuss] Lustre 1.6.0 beta2 is now available
>>>>> "Peter" == Peter Bojanic <pbojanic@clusterfs.com> writes:

Hi Peter,

 >> Any idea when there will be a stable release for this?

 Peter> I think you'll see at least one more beta from CFS before a
 Peter> production release. Our plan is for this to be production
 Peter> ready by this June, but it will depend a lot on the range
 Peter> of testing we've been able to accomplish in that timeframe.

Will the initial release contain support for the OpenFabrics
InfiniBand stack (driver in kernel.org), as described on the roadmap?

Cheers,
Roland

 Peter> Any help with testing, and feedback reported back to here
 Peter> or lustre-devel, would be greatly appreciated.

 Peter> Cheers,
 Peter> Peter

 >> --
 >> Daniel Shearer
 >>
 >> On Tue, 18 Apr 2006, Peter Bojanic wrote:
 >>
 >>> NOTE: NOT FOR PRODUCTION USE
 >>>
 >>> Cluster File Systems is pleased to announce an early beta
 >>> version of Lustre 1.6, featuring MountConf. This new
 >>> configuration and management system, which debuts in Lustre
 >>> 1.6, extends the zeroconf concept from clients to servers,
 >>> so that mounting a server starts it and unmounting stops
 >>> it. Server configuration is specified when you format the
 >>> storage, so lconf and lmc have been retired, and new storage
 >>> can be added dynamically to a live cluster -- just format a
 >>> new device and mount it!
 >>>
 >>> Partners, customers, and evaluators can download this release
 >>> from: http://www.clusterfs.com/download-beta.html
 >>>
 >>> Please see the Lustre wiki for more information about this
 >>> release: https://mail.clusterfs.com/wikis/lustre/MountConf
 >>>
 >>> Thank you for your assistance; as always, you can report
 >>> issues via Bugzilla (https://bugzilla.clusterfs.com/) or email
 >>> (support@clusterfs.com).
 >>>
 >>> -- The Lustre Team --
Hi Roland,

On 2006-04-21, at 18:51, Roland Fehrenbacher wrote:
>>>>>> "Peter" == Peter Bojanic <pbojanic@clusterfs.com> writes:
>
> Hi Peter,
>
> >> Any idea when there will be a stable release for this?
>
> Peter> I think you'll see at least one more beta from CFS before a
> Peter> production release. Our plan is for this to be production
> Peter> ready by this June, but it will depend a lot on the range
> Peter> of testing we've been able to accomplish in that timeframe.
>
> Will the initial release contain support for the OpenFabrics
> InfiniBand stack (driver in kernel.org), as described on the roadmap?

We plan support for OpenFabrics to be included in Lustre 1.4.8 and
Lustre 1.6.0 -- both in the June timeframe.

Peter
Peter Bojanic
2006-May-19 07:36 UTC
[Lustre-discuss] Re: [Lustre-announce] Lustre 1.6.0 beta2 is now available
NOTE: NOT FOR PRODUCTION USE

Cluster File Systems is pleased to announce an early beta version of
Lustre 1.6, featuring MountConf. This new configuration and management
system, which debuts in Lustre 1.6, extends the zeroconf concept from
clients to servers, so that mounting a server starts it and unmounting
stops it. Server configuration is specified when you format the
storage, so lconf and lmc have been retired, and new storage can be
added dynamically to a live cluster -- just format a new device and
mount it!

Partners, customers, and evaluators can download this release from:
http://www.clusterfs.com/download-beta.html

Please see the Lustre wiki for more information about this release:
https://mail.clusterfs.com/wikis/lustre/MountConf

Thank you for your assistance; as always, you can report issues via
Bugzilla (https://bugzilla.clusterfs.com/) or email
(support@clusterfs.com).

-- The Lustre Team --
Hi,

This patchless feature will be integrated ("outegrated" would be
better) for the next beta. There are a few other features that are
missing from 1.6.0 beta2, most notably the automatic arrangement of
stripe indexes and space management.

- Peter -

> -----Original Message-----
> From: lustre-discuss-bounces@clusterfs.com
> [mailto:lustre-discuss-bounces@clusterfs.com] On Behalf Of EKC
> Sent: Wednesday, April 19, 2006 10:45 AM
> To: lustre-discuss@clusterfs.com
> Subject: Re: [Lustre-discuss] Lustre 1.6.0 beta2 is now available
>
> Does Lustre 1.6.0 beta2 (with MountConf) support a patchless client?
> I'm currently running lustre-1.4.6-patchless
> (ftp://ftp.clusterfs.com/pub/people/green/patchless) and it's
> working great so far. Are there any plans to add patchless
> client support to Lustre 1.6.0?
>
> Thanks
Does Lustre 1.6.0 beta2 (with MountConf) support a patchless client?
I'm currently running lustre-1.4.6-patchless
(ftp://ftp.clusterfs.com/pub/people/green/patchless) and it's working
great so far. Are there any plans to add patchless client support to
Lustre 1.6.0?

Thanks
Nathaniel Rutman
2006-May-23 20:21 UTC
[Lustre-announce] Lustre 1.6.0 beta3 is now available
NOTE: NOT FOR PRODUCTION USE

Cluster File Systems is pleased to announce the next beta version of
Lustre 1.6, featuring MountConf. This new configuration and management
system, which debuts in Lustre 1.6, extends the zeroconf concept from
clients to servers, so that mounting a server starts it and unmounting
stops it. Server configuration is specified when you format the
storage, so lconf and lmc have been retired, and new storage can be
added dynamically to a live cluster -- just format a new device and
mount it!

The new beta3 code (aka v1_5_90) now includes "optimized" stripe
allocation, to try to balance space available and network resources.
It also includes improved interoperability with installed (old) Lustre
filesystems.

Partners, customers, and evaluators can download this release from:
http://www.clusterfs.com/download-beta.html

Please see the Lustre wiki for more information about this release:
https://mail.clusterfs.com/wikis/lustre/MountConf

Thank you for your assistance; as always, you can report issues via
Bugzilla (https://bugzilla.clusterfs.com/) or email
(support@clusterfs.com).

-- The Lustre Team --
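The "just format a new device and mount it" workflow described in the announcement might look roughly like the session below. This is a sketch, not a definitive recipe: the device names, node name (mds1), and filesystem name (testfs) are illustrative, and the commands require the Lustre 1.6 utilities and root privileges.

```shell
# Format the storage; the configuration is written to the device
# itself, replacing the old lconf/lmc configuration step.
mkfs.lustre --fsname=testfs --mgs --mdt /dev/sda1            # on the MDS
mkfs.lustre --fsname=testfs --ost --mgsnode=mds1 /dev/sdb1   # on an OSS

# Mounting a server device starts the service it was formatted for...
mount -t lustre /dev/sda1 /mnt/mdt

# ...and unmounting stops it again.
umount /mnt/mdt
```

Adding new storage to a live cluster is then just the OST line repeated for a fresh device, followed by a mount on the OSS that owns it.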