I'm currently working with a testing system that involves running mkfs.ext3 on some pretty large devices on a regular basis. This is getting fairly painful, and I was wondering if there is some way to speed it up. I understand that the end result might be a filesystem with less of a safety factor (say, fewer superblock backups), but the tradeoff might be worth it in this case.

David
If you don't need that many inodes, using -T largefile or largefile4, or specifying a smaller number of inodes, will speed things up quite a bit.

David Shaw wrote:
> I'm currently working with a testing system that involves running
> mkfs.ext3 on some pretty large devices on a regular basis. This is
> getting fairly painful, and I was wondering if there was some way to
> speed this up. Understood that the end result might be a filesystem
> that has less of a safety factor (say, fewer superblock backups) but
> the tradeoff might be worth it in this case.
>
> David
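A minimal sketch of the two suggestions, assuming e2fsprogs is installed; a sparse file stands in for the real block device (the path /tmp/test.img is illustrative, not from the thread):

```shell
# Sparse 1 GiB file as a stand-in for a large device (assumption:
# e2fsprogs installed; replace /tmp/test.img with the real device).
truncate -s 1G /tmp/test.img

# -T largefile allocates one inode per 1 MiB instead of the much
# denser default, so far fewer inode tables have to be written:
mkfs.ext3 -F -q -T largefile /tmp/test.img

# Alternatively, cap the inode count directly with -N:
mkfs.ext3 -F -q -N 65536 /tmp/test.img
```

Either way, the time saved comes from writing fewer inode table blocks at mkfs time, at the cost of running out of inodes sooner if the filesystem later holds many small files.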
On May 02, 2007 14:16 -0400, David Shaw wrote:
> I'm currently working with a testing system that involves running
> mkfs.ext3 on some pretty large devices on a regular basis. This is
> getting fairly painful, and I was wondering if there was some way to
> speed this up. Understood that the end result might be a filesystem
> that has less of a safety factor (say, fewer superblock backups) but
> the tradeoff might be worth it in this case.

Do you actually need to use the whole filesystem afterward? Is the performance of the filesystem a critical feature? If not, then you can use "mke2fs -J size=4 -O lazy_bg ..." to make a smaller journal (to avoid zeroing it out) and to skip zeroing out the inode tables.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
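A sketch of the small-journal part of this suggestion, again against a sparse file as a stand-in for the real device (assumption: e2fsprogs installed; /tmp/test.img is illustrative). The -O lazy_bg feature Andreas mentions came from a patched mke2fs of that era and is not shown running here:

```shell
# Sparse 1 GiB file standing in for the real device.
truncate -s 1G /tmp/test.img

# -J size=4 creates a 4 MiB journal instead of the default, which on a
# large device can be much bigger, so far less data is zeroed at mkfs
# time (-j requests an ext3 journal; -F allows a regular file).
mke2fs -j -F -q -J size=4 /tmp/test.img

# -O lazy_bg (in the patched mke2fs Andreas refers to) additionally
# skips initializing most inode tables; later e2fsprogs expresses the
# same idea as "-E lazy_itable_init=1" with the uninit_bg feature.
```

Note that 4 MiB is close to the minimum journal size (1024 filesystem blocks), so this trades crash-recovery headroom for formatting speed, which matches the "less of a safety factor" tradeoff David accepted.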