On Tue, Jun 05, 2001 at 02:24:33PM -0400, Robert Dege wrote:
> /dev/sdb1 has reached maximal mount count, check forced.
>
> Is this normal? I thought JFS's were supposed to alleviate this check, or
> is that just for fs corruption?
The only reason we do this check at all, even for ext2, is paranoia
about bad IDE cables, cheap PC drives, etc., causing hardware errors,
and about kernel bugs of various kinds (not necessarily even in
filesystem code!) that might let corruption sneak into a filesystem;
it's the equivalent of Windows users running Norton Disk Doctor from
time to time just to make sure nothing bad has slipped in.
If you're running a stable production kernel and using quality
hardware that you're confident won't introduce any problems, then do
feel free to use tune2fs to increase the intervals between checks
(either as a function of time or number of times the filesystem has
been mounted) to a level that you feel confident in.
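For example, a tune2fs invocation along these lines would raise both
limits (shown here against a scratch ext2 image file rather than a real
device, so it can be tried without root; the path is illustrative):

```shell
# Build a small scratch ext2 image so we don't touch a real device.
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=8 2>/dev/null
mke2fs -F -q /tmp/scratch.img

# Force a check only every 50 mounts or every 6 months,
# whichever comes first.
tune2fs -c 50 -i 6m /tmp/scratch.img

# Confirm the new settings.
tune2fs -l /tmp/scratch.img | grep -E 'Maximum mount count|Check interval'
```

On a real system you'd point tune2fs at the device node (e.g. the
/dev/sdb1 from the message above) instead of an image file.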
Note that one trick used by newer versions of mke2fs is to stagger
the mount counts so that they're different on your various
filesystems; that way you won't check all the filesystems at the same
boot, which makes the process much more pleasant.
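The same staggering can be applied by hand with tune2fs to filesystems
that already exist; for instance (again against scratch image files,
with the particular counts chosen arbitrarily):

```shell
# Create three scratch ext2 images standing in for three filesystems.
for i in 1 2 3; do
    dd if=/dev/zero of=/tmp/fs$i.img bs=1M count=4 2>/dev/null
    mke2fs -F -q /tmp/fs$i.img
done

# Give each one a different maximum mount count so that, over time,
# their forced checks land on different boots.
tune2fs -c 25 /tmp/fs1.img
tune2fs -c 30 /tmp/fs2.img
tune2fs -c 35 /tmp/fs3.img

tune2fs -l /tmp/fs2.img | grep 'Maximum mount count'
```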
Another trick you can play if you're using LVM and ext3 is to schedule
a cron job every month or so which takes a snapshot of the ext3
filesystem, and then runs e2fsck -n on the read-only snapshot in the
wee hours of the morning. If the e2fsck turns up any problems, then
the script can send mail to the administrator informing him that
there's a problem, and that downtime should be scheduled so that the
filesystem can be unmounted and fixed.
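A sketch of such a cron script might look like the following. The
volume group and logical volume names (vg0, home), the snapshot size,
and the administrator's address are all hypothetical, and it needs root
and LVM to actually run:

```shell
#!/bin/sh
# Nightly sanity check of an ext3 filesystem via an LVM snapshot.
# vg0, home, and the mail address below are illustrative names.
VG=vg0
LV=home
SNAP=${LV}-check

# Take a small snapshot of the live filesystem.
lvcreate --snapshot --size 256M --name "$SNAP" "/dev/$VG/$LV" || exit 1

# Run a read-only fsck on the snapshot; -n answers "no" to every
# proposed fix, and -f forces a check even if the fs looks clean.
if ! e2fsck -f -n "/dev/$VG/$SNAP"; then
    mail -s "e2fsck found problems on /dev/$VG/$LV" admin@example.com <<EOF
e2fsck -n reported errors on a snapshot of /dev/$VG/$LV.
Please schedule downtime so the filesystem can be unmounted and fixed.
EOF
fi

# Always remove the snapshot when done.
lvremove -f "/dev/$VG/$SNAP"
```

Run from cron (e.g. a monthly crontab entry firing in the wee hours),
this catches creeping corruption without taking the filesystem offline.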
Assuming perfect hardware and software, of course, these measures
should never prove necessary, but since hardware and software aren't
perfect, and data is generally very valuable, it's wise to do periodic
checks. :-)
- Ted