After an aptitude safe-upgrade of Debian's testing (as of today) my root
file system (ext3) seems to have "filled up" and I'm not sure how to get
Linux to correctly report the used size. The drive doesn't appear to be
failing: the logs haven't indicated anything suspicious yet, and
smartmontools didn't show anything abnormal either.

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             48062440  46976212         0 100% /
tmpfs                  2031948         0   2031948   0% /lib/init/rw
udev                     10240        96     10144   1% /dev
tmpfs                  2031948         0   2031948   0% /dev/shm
/dev/sda6            332671516  72230148 243542600  23% /home
overflow                  1024        52       972   6% /tmp

$ du -sh -x /
5.6G    /

$ cat /proc/mounts
rootfs / rootfs rw 0 0
...
/dev/sda1 / ext3 rw,errors=remount-ro,data=ordered 0 0

$ tune2fs -l /dev/sda1
tune2fs 1.41.3 (12-Oct-2008)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          c565110d-be25-4655-b173-178b9c1a3032
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              3055616
Block count:              12207384
Reserved block count:     610369
Free blocks:              271557
Free inodes:              2519269
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1021
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Sun Oct 26 13:12:33 2008
Last mount time:          Sat Dec 20 17:04:53 2008
Last write time:          Sat Dec 20 17:04:53 2008
Mount count:              1
Maximum mount count:      24
Last checked:             Sat Dec 20 16:56:52 2008
Check interval:           15552000 (6 months)
Next check after:         Thu Jun 18 17:56:52 2009
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      57f7b146-08dd-4b8b-884e-31df7ee54afa
Journal backup:           inode blocks

$ uname -a
Linux an 2.6.26-1-amd64 #1 SMP Mon Dec 15 17:25:36 UTC 2008 x86_64 GNU/Linux

I've looked for large files/directories via find (-type d/f -size +1G)
and fsck'ed the partition multiple times with various options, but no
luck. I also tried copying a file large enough to have cp abort due to
being out of disk space.

Is there anything else I can do besides reinstalling?

Adam
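P.S. For concreteness, the searches went roughly like this (from memory,
so treat the exact flags as approximate):

# -xdev keeps find on the root file system, matching du's -x above.
$ find / -xdev -type f -size +1G
$ find / -xdev -type d -size +1G

# Another way to hunt for the space: list the 20 largest directories.
$ du -xk / | sort -n | tail -n 20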
On Sat, Dec 20, 2008 at 18:37:41 -0600, Adam Flott <adam at npjh.com> wrote:

> After an aptitude safe-upgrade of Debian's testing (as of today) my
> root file system (ext3) seems to have "filled up" and I'm not sure how
> to get Linux to correctly report the used size.

Are you aware that there is space in file systems reserved for use only
by root? That may explain your confusion. The purpose of the reserve is
to let a sysadmin keep some things working even if a normal user fills
up a file system. The size of the reserve on ext2/3 file systems can be
changed with tune2fs.
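For example (a sketch only; substitute your own device, and note that
-m takes a percentage of the blocks while -r takes an absolute block
count):

# Show the current reserve:
$ tune2fs -l /dev/sda1 | grep -i reserved

# Shrink the reserve to 1% of the blocks (the mke2fs default is 5%):
$ tune2fs -m 1 /dev/sda1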
On Sat, 20 Dec 2008, Adam Flott wrote:

> $ df
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda1             48062440  46976212         0 100% /

So, "/" is really ~45 GB in total, but:

> $ du -sh -x /
> 5.6G    /

du(1) counts only 5.6 GB? Hm, the first thing that comes to mind is of
course (stale) open files: files that have been deleted but are still
held open, so find(1) cannot see them any more, yet their blocks are not
freed back to the fs and df(1) still counts them. I usually use
"lsof -ln | grep deleted", but that'd be a *lot* of large, open files.

> Block count:              12207384
> Reserved block count:     610369

This reserve would sum up to ~2.3 GB, but this still does not explain
the difference to 45 GB. Hm.

> I've looked for large files/directories via find (-type d/f -size +1G)
> and fsck'ed the partition multiple times with various options, but no
> luck.

And you unmounted, or at least remounted read-only, the partition for
the fsck, so the open files should not even be an issue here. Strange
indeed... sorry to be of no help here...

C.

-- 
BOFH excuse #39: terrorist activities
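P.S. Both checks spelled out, in case it helps (the block numbers are
taken from your tune2fs output above, so only the arithmetic is mine):

# Deleted-but-open files, whose blocks df still counts:
$ lsof -ln | grep -i deleted
# Alternative: +L1 selects files with a link count below 1, i.e.
# unlinked files that some process still holds open. If any show up,
# restarting the owning process releases the space.
$ lsof -ln +L1

# Size of the root reserve: reserved blocks * block size
# (610369 * 4096 = 2500071424 bytes, i.e. ~2.3 GiB):
$ echo $((610369 * 4096))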