The Lustre documentation team is now in the planning stage of developing a formal Quick Start Guide for Lustre. The scope, topical coverage, and layout of the Guide are being considered. We invite the Lustre community to suggest specific quick start topics that you would like to see included in this new guide. Also, if there are quick start guides for other products that you find particularly useful and well done, please post their links to this list.

Thanks for your help,
Lustre docs team
> [ ... ] We invite the Lustre community to suggest specific
> quick start topics that you would like to see included in this
> new guide. [ ... ]

I have been impressed with the number of interesting presentations on Lustre from the large science facility community, and among them I have spotted a nice quick start guide here:

http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391

This is not quite a HOWTO, but it has some interesting suggestions for HA and backup (even if I think that the only sensible way to back up a large Lustre storage pool is another Lustre storage pool):

http://indico.cern.ch/contributionDisplay.py?contribId=24&sessionId=12&confId=27391
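A pool-to-pool copy in that spirit can be as simple as the sketch below, run from a data-mover node with both filesystems mounted; the mount points are hypothetical, and this is only an illustration of the idea, not a recommendation about striping, consistency, or how to scale the copy across many clients:

    # Mirror one mounted Lustre filesystem onto a second one
    # (illustrative paths, not from any real site).
    rsync -aH --delete /mnt/lustre/ /mnt/lustre-backup/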
> This is not quite a HOWTO but has some interesting suggestions
> for HA and backup (even if I think that the only sensible way to
> backup a large Lustre storage pool is another Lustre storage
> pool):
>
> http://indico.cern.ch/contributionDisplay.py?contribId=24&sessionId=12&confId=27391

Interesting that they are using DRBD. We thought about this, and there is a request about it in Bugzilla, but nothing appears to have been done about it. I have used it before for NFS and LVM with Xen virtual machines without issue.

We also asked Sun about using an iSCSI array for the shared storage for failover with the MDT/MGS. We were told it had not been tested and to use FC in its place. Somewhat disappointing: FC more than doubled the cost of the MDT/MGS setup once you put in the FC adapters and the cabinet. We wanted to be safe, though.

On your replacing the Thumper sata_mv driver with the one from Sun, I hope this fixes the lacking performance...

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
brockp at umich.edu
(734) 936-1985
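A DRBD-replicated MDT/MGS along the lines being discussed would look roughly like the sketch below. The resource name, hostnames, addresses, devices, and mount point are all hypothetical, and this is only an outline of the idea (DRBD 8.x style syntax), not a tested Lustre configuration:

    # /etc/drbd.conf fragment: replicate the MDT/MGS block device
    # synchronously between two metadata servers (mds1 and mds2 are
    # made-up hostnames, /dev/sdb1 a made-up backing device).
    resource mdt0 {
      protocol C;
      on mds1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.11:7788;
        meta-disk internal;
      }
      on mds2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.12:7788;
        meta-disk internal;
      }
    }

    # On whichever node is currently DRBD primary: format and mount
    # the replicated device as a combined MGS/MDT.
    mkfs.lustre --fsname=testfs --mgs --mdt /dev/drbd0
    mount -t lustre /dev/drbd0 /mnt/mdt

    # On failover (only after being sure the old primary is down):
    drbdadm primary mdt0
    mount -t lustre /dev/drbd0 /mnt/mdt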
Hi Brock,

I'm experimenting with a Dell iSCSI array in one of our labs here ... so far, it behaves pretty typically for a Lustre filesystem, although the performance isn't blazing -- but that's due to limitations of the network infrastructure.

I didn't notice anything in the iSCSI gear that would indicate that it couldn't do failover ... on my OSS nodes, I'm using disk labels rather than device IDs, and the OSTs are interchangeable on both OSSes.

Just FYI, I don't know if there's value in it. I hadn't planned on testing failover with this config, as it was mostly a proof of concept of iSCSI Lustre for management, but I could make it happen at some point.

Klaus

On 5/23/08 8:53 AM, "Brock Palen" <brockp at umich.edu> did etch on stone tablets:

> Interesting that they are using DRBD. We thought about this, and there is
> a request about it in Bugzilla, but nothing appears to have been done
> about it. [ ... ]
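A minimal sketch of that label-based approach is below. The filesystem name, NIDs, device, and mount point are made up for illustration, and the options shown are the 1.6-era mkfs.lustre ones as I understand them, so check your local man pages before relying on this:

    # Format an OST and record a failover NID so either OSS can serve it
    # (testfs, the NIDs, /dev/sdc, and the mount point are hypothetical):
    mkfs.lustre --fsname=testfs --ost --index=0 \
        --mgsnode=10.0.0.1@tcp0 --failnode=10.0.0.22@tcp0 /dev/sdc

    # The target carries a filesystem label (here testfs-OST0000), so the
    # other OSS can mount it by label even if the iSCSI LUN shows up under
    # a different /dev name on that node:
    mount -t lustre -L testfs-OST0000 /mnt/testfs-ost0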
On May 23, 2008, at 5:08 PM, Klaus Steden wrote:

> Hi Brock,
>
> I'm experimenting with a Dell iSCSI array in one of our labs here ... so
> far, it behaves pretty typically for a Lustre filesystem, although the
> performance isn't blazing -- but that's due to limitations of the network
> infrastructure.
>
> I didn't notice anything in the iSCSI gear that would indicate that it
> couldn't do failover ... on my OSS nodes, I'm using disk labels rather
> than device IDs, and the OSTs are interchangeable on both OSSes.
>
> Just FYI, I don't know if there's value in it. I hadn't planned on testing
> failover with this config, as it was mostly a proof of concept of iSCSI
> Lustre for management, but I could make it happen at some point.

While it is too late for us now to know that this will work, I also knew of no reason why it should not work. Thanks for going down that path; it will be good to have options other than FC, which not all sites like to add to or deal with in their networks.

Thanks

> Klaus
>
> [ ... ]