I have a test 4.x-stable system that I recently rebooted into.
The last time I had updated it was May 3rd. I cvsup'ed it,
did the buildworld/installworlds, and everything seemed fine.
I then thought I would update all the ports. When upgrading
XFree86, the /usr partition ran out of disk space. Now the
partition shows up as:
(21) df -k
Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
/dev/ad0s2a    251950    35550   196244    15%    /
/dev/ad0s2f    241870       10   222512     0%    /tmp
/dev/ad0s2g   2064302  -164180  2063338    -9%    /usr
/dev/ad0s2e    257998    61258   176102    26%    /var
/dev/ad0s2h   1311026   676644   529500    56%    /Users
This has persisted through a bunch of 'sync's and a system
reboot. Actually it had started at -1% capacity, but went
to -9% as I removed files (like all of /usr/obj/usr/src).
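If I'm reading df(1) right, the Capacity column is just Used divided
by (Used + Avail), so a bogus negative Used number is enough to pull
the percentage below zero:

   Capacity = 100 * Used / (Used + Avail)
            = 100 * (-164180) / (-164180 + 2063338)
            = 100 * (-164180) / 1899158
            =~ -8.6%, which df rounds to the -9% shown above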
I have some other process running right now, but when that is
done I'm going to shut down and then run fsck on that partition.
I assume that will clear it up.
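The plan is nothing fancy, roughly the following (assuming nothing
keeps /usr busy once the box is in single-user mode):

   shutdown now              (drop to single-user mode)
   umount /usr               (fsck wants the filesystem unmounted, or
                              at least mounted read-only)
   fsck -y /dev/ad0s2g       (should salvage the bogus block/inode
                              summary counts)
   mount /usr                (remount and see what df says)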
This is on a dual-CPU system, if that is significant. The
partition is mounted:
/dev/ad0s2g on /usr (ufs, local, soft-updates)
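For reference, the minfree reserve can be read straight off the
superblock without unmounting anything, e.g.:

   dumpfs /dev/ad0s2g | grep -i minfree    (should show the 8% default)
   tunefs -p /dev/ad0s2g                   (prints the tuneables too, if
                                            this 4.x tunefs has the -p
                                            option)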
This is only a test system, so it isn't much of a problem for
me. I just thought that it was odd enough that I should
mention it. Has anyone else seen behavior like this?
--
Garance Alistair Drosehn            =   gad@gilead.netel.rpi.edu
Senior Systems Programmer           or  gad@freebsd.org
Rensselaer Polytechnic Institute    or  drosih@rpi.edu
From: Karel J. Bosschaart
Date: 2003-Jul-29 07:20 UTC
Subject: Strange results after partition-full condition...

On Tue, Jul 29, 2003 at 12:55:26AM +0200, Garance A Drosihn wrote:
> I have a test 4.x-stable system that I recently rebooted into.
> The last time I had updated it was May 3rd. I cvsup'ed it,
> did the buildworld/installworlds, and everything seemed fine.
>
> I then thought I would update all the ports. When upgrading
> XFree86, the /usr partition ran out of disk space. Now the
> partition shows up as:
>
> (21) df -k
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/ad0s2a    251950    35550   196244    15%    /
> /dev/ad0s2f    241870       10   222512     0%    /tmp
> /dev/ad0s2g   2064302  -164180  2063338    -9%    /usr
> /dev/ad0s2e    257998    61258   176102    26%    /var
> /dev/ad0s2h   1311026   676644   529500    56%    /Users
>
> This has persisted through a bunch of 'sync's and a system
> reboot. Actually it had started at -1% capacity, but went
> to -9% as I removed files (like all of /usr/obj/usr/src).

A while back I also saw weird numbers on one of my partitions:
even though minfree was at the default 8%, "Capacity" went to
~130% and "Used" became larger than the number of 1K-blocks.
This was on -stable, with soft-updates enabled. 'sync's didn't
have any effect there either.

> I have some other process running right now, but when that is
> done I'm going to shut down and then run fsck on that partition.
> I assume that will clear it up.

Yes, in my case that solved the problem.

> This is on a dual-CPU system, if that is significant. The
> partition is mounted:
> /dev/ad0s2g on /usr (ufs, local, soft-updates)
>
> This is only a test system, so it isn't much of a problem for
> me. I just thought that it was odd enough that I should
> mention it. Has anyone else seen behavior like this?

I've seen it, but I have no idea how to reproduce the situation
described above.

I don't know whether it is related, but when using a USB flash
drive (UFS-formatted, no soft-updates, minfree=0), I often see
after mounting that the free space is incorrect: it shows the
free space the drive had before I deleted some files during a
previous mount. Unmounting and fsck'ing solves it. It usually
happens when the stick is unmounted in -stable and then mounted
on -current. Although mounting in -stable shows the correct
numbers, an fsck (also on -stable) reveals "SUMMARY INFORMATION
BAD". Is this an indication that the filesystem was left in a
dirty state? Both -stable and -current mount it anyway, but only
on -current is it evident from the numbers that something is
wrong. (After unmounting I always wait a while before
disconnecting the flash drive, at least until the LED stops
flashing, but usually much longer.)

Karel.
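
P.S. For completeness, the sequence I use to clear it on the stick is
roughly the following (the device name and mount point are just
whatever they happen to be on my box):

   umount /mnt/flash
   fsck -y /dev/da0s1e          (this is where the SUMMARY INFORMATION
                                 BAD message shows up; answering yes
                                 fixes the counts)
   mount /dev/da0s1e /mnt/flash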