Guolin Cheng
2004-Mar-03 23:01 UTC
heavily fragmented file system.. How to defrag it on-line??
Hi, all,

I have machines that run continuously for a long time, and the underlying ext3 file systems have become quite heavily fragmented (94% non-contiguous). We just don't get a chance to shut the machines down since they are always busy.

I tried the defrag 0.70 that comes with the e2fsprogs package and the standalone 0.73 package, but neither helps me, since the defrag tool cannot handle ext3. A third-party commercial tool, oodcmd, doesn't help either, since it can only deal with idle, unmounted file systems, and it cannot guarantee data integrity. In that case, I mean when the machine is booted into repair mode and the file system is not in use, we can already use gtar to save and restore the data without loss, so the commercial tool is of almost no help to us.

Does anyone have any ideas on defragmenting ext3 file systems on-line? Thanks a lot.

-----------------------------------------------------------------------------
The following is the fragmentation reported by e2fsck:

arc158.example.com guolin 135% sudo e2fsck -f -n -d /dev/hda9
e2fsck 1.27 (8-Mar-2002)
Warning!  /dev/hda9 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/0: 1225/8601600 files (94.3% non-contiguous), 12724107/17181982 blocks

--Guolin Cheng
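[For reference, a minimal sketch of the repair-mode gtar save/restore cycle mentioned above; the mount point, backup destination, and mke2fs options are illustrative assumptions, not a tested procedure:]

    # Run from a rescue/repair environment; /mnt/data and /backup are assumptions.
    tar -C /mnt/data -cf /backup/hda9.tar .   # save while still mounted
    umount /dev/hda9
    mke2fs -j /dev/hda9                       # re-create a fresh ext3 filesystem
    mount /dev/hda9 /mnt/data
    tar -C /mnt/data -xpf /backup/hda9.tar    # restore; a sequential restore onto a
                                              # fresh fs is laid out largely contiguously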
Andreas Dilger
2004-Mar-04 02:12 UTC
heavily fragmented file system.. How to defrag it on-line??
On Mar 03, 2004  15:01 -0800, Guolin Cheng wrote:
> I got machines running continuously for long time, but then the underlying
> ext3 file systems becomes quite heavily fragmented (94% non-contiguous).
> We just don't have a chance to shutdown the machines since they are always busy..
> Anyone have any ideas on defraging ext3 file systems on-line? Thanks a lot.

Why do you think you need to defragment?  Do you notice performance loss, or is it just because of the e2fsck number?  Given that you have only 1225 files and 50GB of space usage, it is almost guaranteed that each file will not be contiguous.

> -----------------------------------------------------------------------------
> The following is the fragmentation reported by e2fsck:
>
> arc158.example.com guolin 135% sudo e2fsck -f -n -d /dev/hda9
> e2fsck 1.27 (8-Mar-2002)
> Warning!  /dev/hda9 is mounted.
> Warning: skipping journal recovery because doing a read-only filesystem check.
> Pass 1: Checking inodes, blocks, and sizes
> Pass 2: Checking directory structure
> Pass 3: Checking directory connectivity
> Pass 4: Checking reference counts
> Pass 5: Checking group summary information
> /0: 1225/8601600 files (94.3% non-contiguous), 12724107/17181982 blocks

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
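[A quick back-of-the-envelope check of that point from the e2fsck line above, assuming the common 4KB block size (the actual block size of /dev/hda9 is an assumption here):]

    # 12724107 blocks in use across 1225 files, assumed 4096-byte blocks:
    # blocks per file, then bytes, then MB -- roughly 40MB per file on average.
    echo $(( 12724107 / 1225 * 4096 / 1048576 ))

[Files of that size will almost always be stored as several extents, so a high non-contiguous percentage does not by itself indicate a problem.]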
Theodore Ts'o
2004-Mar-04 02:17 UTC
heavily fragmented file system.. How to defrag it on-line??
On Wed, Mar 03, 2004 at 03:01:35PM -0800, Guolin Cheng wrote:
> I got machines running continuously for long time, but then the underlying
> ext3 file systems becomes quite heavily fragmented (94% non-contiguous).

Note that non-contiguous does not necessarily mean fragmented.  Files that are larger than a block group will be non-contiguous by definition.  On the other hand, if you have more than one file simultaneously being written to in a directory, then yes, the files will certainly get fragmented.

Are you seeing a significant read-performance degradation?  If not, it may not be worth bothering to defrag the filesystem.

> Anyone have any ideas on defraging ext3 file systems on-line? Thanks a lot.

There are no on-line defrag tools right now, sorry.

						- Ted
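[As a reference point for the block-group size Ted mentions, dumpe2fs from e2fsprogs prints it in the superblock summary; a minimal sketch against the device shown earlier in the thread:]

    # -h prints only the superblock summary: look for "Block size:" and
    # "Blocks per group:".  With typical values (4KB blocks, 32768 blocks
    # per group), any file larger than 128MB cannot be contiguous.
    sudo dumpe2fs -h /dev/hda9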
Guolin Cheng
2004-Mar-04 02:27 UTC
heavily fragmented file system.. How to defrag it on-line??
Hi, Ted,

Thanks for your response.

> > I got machines running continuously for long time, but then the underlying
> > ext3 file systems becomes quite heavily fragmented (94% non-contiguous).
>
> Note that non-contiguous does not necessarily mean fragmented.  Files
> that are larger than a block group will be non-contiguous by
> definition.  On the other hand, if you have more than one file
> simultaneously being written to in a directory, then yes, the files
> will certainly get fragmented.

Yeah, the read/write speed is about 10 times slower: the original speed was about 20-40MB/s, while now it is only about 1-5MB/s. We normally write the disk to >90% full, then delete lots of files, write new files again to >90% full, then delete again.

> There are no on-line defrag tools right now, sorry.

So I have to use gtar to save the data, re-create the file system, and finally use gtar to copy the data back? That will take a long time, and I will have to stop the existing processes on the machines. That is definitely the last solution I would consider.

Are there any plans to develop a tool to defragment ext3 file systems on-line? Thanks.
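[A simple way to quantify that slowdown is to time a sequential read of one of the affected files; a minimal sketch, where the file path is an illustrative assumption (pick a file larger than RAM, or one not recently read, so the page cache doesn't skew the result):]

    # Time a streaming read; MB/s = file size / elapsed time.
    time dd if=/mnt/data/bigfile of=/dev/null bs=1M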
Guolin Cheng
2004-Mar-04 02:36 UTC
heavily fragmented file system.. How to defrag it on-line??
Hi, Andreas,

Thanks for your quick response.

> Why do you think you need to defragment?  Do you notice performance loss, or
> is it just because of the e2fsck number?  Given that you have only 1225 files
> and 50GB of space usage, it is almost guaranteed that each file will not be
> contiguous.

Then how can I figure out whether the files are fragmented? The file system's read/write speed has greatly slowed down (about 8-10 times slower in extreme cases). Can you suggest a tool or package that reports a file system's fragmentation percentage?

I tried a beta version of the oodcmd tool, which reports both a block fragmentation percentage and an inode fragmentation percentage. Are those enough, or are there more fragmentation characteristics? I'm a little hesitant to use the commercial oodcmd tool since it can only work when file systems are unmounted and idle.. sigh..

Thanks.

--Guolin Cheng
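[For per-file reporting, newer releases of e2fsprogs ship a filefrag utility (the 1.27 shown earlier in the thread may predate it, so whether it is installed is an assumption); it uses the FIBMAP ioctl and therefore needs root:]

    # One extent per file means contiguous; more extents mean fragmentation.
    # The paths are illustrative assumptions.
    sudo filefrag /mnt/data/*
    sudo filefrag -v /mnt/data/bigfile   # -v lists the individual extents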