Hi:

In our project we want to use a large OST partition, but in our testing ext3 was not stable on a large (7TB) partition, while ext2 was OK. So we want to use ext2 as the OST's filesystem. Can we?

Thanks
On Jun 19, 2007 13:57 +0800, swin wang wrote:
> In our project we want to use a large OST partition, but in our testing
> ext3 was not stable on a large (7TB) partition, while ext2 was OK.

We haven't had any reports of similar problems. There are many production
systems with 4TB OSTs that do not have problems. There are no filesystem
limitations I'm aware of between 4TB and 7TB that should cause problems.
At 8TB there are known issues with the signedness of 32-bit values that
have not yet been fixed in the vendor kernels we use.

> So we want to use ext2 as the OST's filesystem. Can we?

No, this is not possible. Every crash of an OSS node would take 3-5h of
e2fsck to recover, and multi-file updates done by Lustre would be left
inconsistent, because without a journal there is nothing to ensure atomic
updates. There is also no mballoc+extents patch for ext2, so performance
would be worse than with ext3.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
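The ext2-vs-ext3 distinction above (no journal on ext2) can be seen directly with e2fsprogs. A minimal sketch on a scratch image file; the image path and 8 MB size are illustrative, not from the thread, and on a real OST you would point tune2fs at the block device instead:

```shell
#!/bin/sh
# Demonstrate that ext3 carries a journal (the has_journal feature)
# while ext2 does not, using a throwaway image file instead of a device.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1024 count=8192 2>/dev/null   # 8 MB scratch image

mkfs.ext2 -F -q "$IMG"
tune2fs -l "$IMG" | grep -c has_journal   # prints 0: ext2 has no journal

mkfs.ext3 -F -q "$IMG"
tune2fs -l "$IMG" | grep -c has_journal   # prints 1: ext3 lists has_journal

rm -f "$IMG"
```

It is exactly this journal that lets e2fsck after a crash replay a few outstanding transactions instead of walking the whole 7TB of metadata.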
We have not tested a 7TB OST partition with Lustre, but with ext3 alone it is not stable: sometimes the filesystem turns read-only during writing, and sometimes fsck reports errors and some inodes are then cleared.

2007/6/19, Andreas Dilger <adilger@clusterfs.com>:
> On Jun 19, 2007 13:57 +0800, swin wang wrote:
> > In our project we want to use a large OST partition, but in our testing
> > ext3 was not stable on a large (7TB) partition, while ext2 was OK.
>
> We haven't had any reports of similar problems. There are many production
> systems with 4TB OSTs that do not have problems. There are no filesystem
> limitations I'm aware of between 4TB and 7TB that should cause problems.
> At 8TB there are known issues with the signedness of 32-bit values that
> have not yet been fixed in the vendor kernels we use.
>
> > So we want to use ext2 as the OST's filesystem. Can we?
>
> No, this is not possible. Every crash of an OSS node would take 3-5h of
> e2fsck to recover, and multi-file updates done by Lustre would be left
> inconsistent, because without a journal there is nothing to ensure atomic
> updates. There is also no mballoc+extents patch for ext2, so performance
> would be worse than with ext3.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Software Engineer
> Cluster File Systems, Inc.
On Tuesday 19 June 2007, swin wang wrote:
> We have not tested a 7TB OST partition with Lustre, but with ext3 alone
> it is not stable: sometimes the filesystem turns read-only during writing,
> and sometimes fsck reports errors and some inodes are then cleared.

Could you tell us which distribution, kernel and arch this is on? Also, what
do you run on it to produce these errors?

FWIW, we are running multiple ext3 (and ldiskfs) systems >2T with no
(noticed) problems (this on CentOS-4/5, dist kernel and on x86_64).

/Peter
The kernel is 2.6.9-34.ELsmp, x86_64, Red Hat RHEL v4.

We just want to test whether ext3 is stable on a large partition, so we wrote a shell script that copies a 1.9GB file many times, until an error occurs or the partition is nearly full, and then runs fsck.

2007/6/19, Peter Kjellstrom <cap@nsc.liu.se>:
> On Tuesday 19 June 2007, swin wang wrote:
> > We have not tested a 7TB OST partition with Lustre, but with ext3 alone
> > it is not stable: sometimes the filesystem turns read-only during
> > writing, and sometimes fsck reports errors and some inodes are then
> > cleared.
>
> Could you tell us which distribution, kernel and arch this is on? Also,
> what do you run on it to produce these errors?
>
> FWIW, we are running multiple ext3 (and ldiskfs) systems >2T with no
> (noticed) problems (this on CentOS-4/5, dist kernel and on x86_64).
>
> /Peter
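The test procedure described above can be sketched roughly as follows. The thread only says the script copies a 1.9GB file until an error occurs or the partition is nearly full, so everything concrete here (paths, file size, the MAX_COPIES cap) is an illustrative assumption, scaled down so the sketch runs anywhere:

```shell
#!/bin/sh
# Hypothetical fill-until-full stress test, scaled down to a temp dir.
# In the real test SRC would be the 1.9GB file and DST a directory on
# the 7TB ext3 partition, with no copy cap.
WORK=$(mktemp -d)
SRC="$WORK/sample.dat"          # stand-in for the 1.9GB source file
DST="$WORK/target"              # stand-in for the 7TB ext3 mount point
mkdir -p "$DST"
dd if=/dev/zero of="$SRC" bs=1024 count=4 2>/dev/null

MAX_COPIES=5                    # real test: loop until cp fails or df shows the fs nearly full
i=0
while [ "$i" -lt "$MAX_COPIES" ] && cp "$SRC" "$DST/copy.$i" 2>/dev/null; do
    i=$((i + 1))
done
echo "copies written: $i"

# On the real partition you would then check for the symptoms reported
# in this thread, e.g.:
#   grep ' ro[, ]' /proc/mounts                  # did ext3 remount read-only?
#   umount /dev/sdX1 && fsck.ext3 -nf /dev/sdX1  # non-destructive check
rm -rf "$WORK"
```

Note that `fsck.ext3 -n` answers "no" to all repair prompts, so the check after each fill run is non-destructive; only a subsequent repair pass would actually clear inodes.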
Hi,

On Tuesday 19 June 2007 08:13:17 am swin wang wrote:
> The kernel is 2.6.9-34.ELsmp, x86_64, Red Hat RHEL v4.
>
> We just want to test whether ext3 is stable on a large partition, so we
> wrote a shell script that copies a 1.9GB file many times, until an error
> occurs or the partition is nearly full, and then runs fsck.

And can you give some details about your hardware configuration?

Cheers,
--
Kilian
Hardware RAID6, 7.2TB partition
4GB memory
x86_64, Intel(R) Pentium(R) D CPU 3.00GHz x 2

Need any more details?

2007/6/20, Kilian CAVALOTTI <kilian@stanford.edu>:
> Hi,
>
> On Tuesday 19 June 2007 08:13:17 am swin wang wrote:
> > The kernel is 2.6.9-34.ELsmp, x86_64, Red Hat RHEL v4.
> >
> > We just want to test whether ext3 is stable on a large partition, so we
> > wrote a shell script that copies a 1.9GB file many times, until an error
> > occurs or the partition is nearly full, and then runs fsck.
>
> And can you give some details about your hardware configuration?
>
> Cheers,
> --
> Kilian
On Tuesday 19 June 2007 19:31:31 you wrote:
> Hardware RAID6, 7.2TB partition
> 4GB memory
> x86_64, Intel(R) Pentium(R) D CPU 3.00GHz x 2
> Need any more details?

Is your RAID hardware or software? SCSI or SAS? What controller do you use?
I'm asking because we had this kind of issue with Perc 4e/DC cards.

--
Kilian
12 x 750GB SATA disks
RAID controller: HP Smart Array P400

2007/6/20, Kilian CAVALOTTI <kilian@stanford.edu>:
> On Tuesday 19 June 2007 19:31:31 you wrote:
> > Hardware RAID6, 7.2TB partition
> > 4GB memory
> > x86_64, Intel(R) Pentium(R) D CPU 3.00GHz x 2
> > Need any more details?
>
> Is your RAID hardware or software? SCSI or SAS? What controller do you
> use? I'm asking because we had this kind of issue with Perc 4e/DC cards.
>
> --
> Kilian