Hey all,

Ok, so I've been having some trouble for a while with an EC2 instance running CentOS 5.11 with a disk volume reporting 100% usage. Root is on an EBS volume.

I've tried the whole 'du -sk | sort -nr | head -10' routine all around this volume, getting rid of files. At first I got rid of about 50MB of files, yet the volume remained at 100% capacity. Thinking that maybe the OS just wasn't letting go of the inodes for the deleted files, I rebooted the instance. After logging in again I did a df -h / on the root volume. And look! Still at 100% capacity used. Grrr....

Ok, so I then did a du -h on the /var/www directory, which was mounted on the root volume, and saw that it was gobbling up 190MB of disk space. So I reasoned that I could create an EBS volume, rsync the data there, blow away the contents of /var/www/*, and then mount the EBS volume on the /var/www directory. I went through that exercise and lo and behold, still at 100% capacity. Rebooted the instance again, logged in, and... still at 100% capacity.

Here's how the volumes are looking now:

[root at ops:~] #df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.9G  9.3G   49M 100% /
none                  312M     0  312M   0% /dev/shm
/dev/sdi              148G  116G   25G  83% /backup/tapes
/dev/sdh              9.9G  385M  9.0G   5% /backup/tapes/bacula-restores
/dev/sdf              9.9G  2.1G  7.4G  22% /var/lib/mysql
fuse                  256T     0  256T   0% /backup/mysql
fuse                  256T     0  256T   0% /backup/svn
/dev/sdg              197G  377M  187G   1% /var/www

There are some really important functions I need this volume to perform that it simply can't, because the root volume is at 100% capacity. For instance, neither mysql nor my backup program (bacula) will even think of starting up and functioning!

I'm at a loss to explain how I can delete 190MB worth of data, reboot the instance, and still be at 100% usage. I'm at my wits' end over this. Can someone please offer some advice on how to solve this problem?

Thanks
Tim

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
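For anyone hitting the same wall, here is a minimal sketch of the kind of per-directory sweep described above, plus a check for deleted-but-still-open files; the paths and flags are illustrative, not the exact commands that were run:

  # ten biggest directories on the root filesystem only
  # (-x stops du from descending into the other mounted volumes)
  du -xsk /* 2>/dev/null | sort -nr | head -10

  # files deleted on disk but still held open by a running process
  # keep consuming space until that process is restarted
  lsof +L1 2>/dev/null

  # compare du's total against df's used figure; a large gap points
  # at open-but-deleted files or at the filesystem's reserved blocks
  df -m /
  du -xsm / 2>/dev/null | tail -1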
On Sat, 28 Feb 2015 01:46:15 -0500 Tim Dunphy wrote:

> /dev/sda1             9.9G  9.3G   49M 100% /

49MB out of 9.9GB is less than one-half of one percent, so the df command is probably rounding that up to 100% instead of showing you 99.51%. Whatever is checking for free disk space is likely doing the same thing.

--
MELVILLE THEATRE ~ Real D 3D Digital Cinema ~
www.melvilletheatre.com
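For what it's worth, df's Use% seems to be computed as used / (used + available) and rounded up, which is why that last sliver disappears. A quick check of the arithmetic with the numbers from the df output above (the awk one-liner is just illustration):

  # 9.3G used, 49M available -> used / (used + avail), rounded up by df
  echo "9300 49" | awk '{ printf "%.2f%% used\n", $1 / ($1 + $2) * 100 }'
  # prints about 99.48% used, which df reports as 100%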
On 2/27/2015 10:46 PM, Tim Dunphy wrote:

> I'm at a loss to explain how I can delete 190MB worth of data, reboot the
> instance and still be at 100% usage.

190MB is less than one percent of 9.9GB (aka 9900MB).

BTW, for cases like this, I'd suggest using df -k or -m rather than -h to get more precise and consistent values.

Also note, Unix (and Linux) file systems usually have a reserved free space that only root can write into; most modern file systems suffer from severe fragmentation if you completely fill them. On ext*fs you adjust this with `tune2fs -m 1 /dev/sdXX`. XFS treats these reserved blocks as inviolable, so they don't show up as free space; they can be changed with xfs_io, but modify them at your own risk.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
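A minimal sketch of checking and trimming that reserve on the root device, assuming /dev/sda1 is ext3 here (the usual case on a CentOS 5 EBS root); substitute your own device, and keep at least a small reserve so root can still log in and clean up:

  # show the current reserved block count (the default is 5% of the filesystem)
  tune2fs -l /dev/sda1 | grep -i 'reserved block count'

  # drop the root-only reserve from 5% to 1%; on a ~10GB volume this hands
  # roughly 400MB back to the Avail column, and it can be run on a
  # mounted filesystem
  tune2fs -m 1 /dev/sda1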
Hey guys,

Thanks for this response. I just wanted to get back to you to let you know how I was able to resolve this. And yeah, I think it's more informative to use df -m or df -k, so I'll try to stick to that from now on, especially when posting to the lists.

I took a look around on the disk and saw that the /var/www and /usr/local directories were the biggest. So I solved this in a way you can really only do easily on AWS: I grabbed the smallest EBS volumes I could use for those directories (1GB for /var/www and 2GB for /usr/local respectively), 1GB being the smallest EBS volume you can get. Like I said earlier, I had around 195MB of data in /var/www and about 1.5GB of data in /usr/local. So I mounted the new volumes on /mnt/www and /mnt/local and rsynced the contents of those directories there, blew away the contents of the original directories with rm -rf (scary, but I was very careful while doing this), and then re-mounted the new volumes on the original paths. And voila!

[root at ops:~] #df -m
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/sda1                10080      8431      1546  85% /
none                       312         0       312   0% /dev/shm
/dev/sdi                151190    122853     20658  86% /backup/tapes
/dev/sdh                 10080       385      9183   5% /backup/tapes/bacula-restores
/dev/sdf                 10080      2064      7504  22% /var/lib/mysql
fuse                 268435456         0 268435456   0% /backup/mysql
fuse                 268435456         0 268435456   0% /backup/svn
/dev/sdj                  1008       223       735  24% /var/www
/dev/sdk                  2016      1335       579  70% /usr/local

Problem solved. Right now my root EBS volume is down to about 85% used instead of 100% used. Maybe a little unconventional, but at least it got the job done.

Thanks again, guys!

Tim

On Sat, Feb 28, 2015 at 2:46 AM, John R Pierce <pierce at hogranch.com> wrote:

> On 2/27/2015 10:46 PM, Tim Dunphy wrote:
>
>> I'm at a loss to explain how I can delete 190MB worth of data, reboot the
>> instance and still be at 100% usage.
>
> 190MB is less than one percent of 9.9GB aka 9900MB
>
> BTW, for cases like this, I'd suggest using df -k or -m rather than -h to
> get more precise and consistent values.
>
> also note, Unix (and Linux) file systems usually have a reserved
> freespace, only root can write that last bit.  most modern file systems
> suffer from severe fragmentation if you completely fill them.  ext*fs, you
> adjust this with `tune2fs -m 1 /dev/sdXX`. XFS treats these reserved blocks
> as inviolable, so they don't show up as freespace, they can be changed with
> xfs_io but should be modified at your own risk.
>
> --
> john r pierce                                      37N 122W
> somewhere on the middle of the left coast
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
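A rough sketch of that migration for anyone repeating it; the device names match the df output above, but the mkfs step is assumed (a fresh EBS volume arrives with no filesystem), so double-check devices and paths on your own instance before running anything destructive:

  # put a filesystem on each new EBS volume and stage them under /mnt
  mkfs -t ext3 /dev/sdj
  mkfs -t ext3 /dev/sdk
  mkdir -p /mnt/www /mnt/local
  mount /dev/sdj /mnt/www
  mount /dev/sdk /mnt/local

  # copy the data across, preserving ownership, permissions and hard links
  rsync -avH /var/www/ /mnt/www/
  rsync -avH /usr/local/ /mnt/local/

  # clear the originals off the root volume, then remount the new
  # volumes on the original paths (add fstab entries to make it stick)
  rm -rf /var/www/* /usr/local/*
  umount /mnt/www /mnt/local
  mount /dev/sdj /var/www
  mount /dev/sdk /usr/local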