Hello,

I have a pair of servers that act as SMTP/AV gateways. It seems that
even though we've told the AV software not to store messages, it is
anyway.

They've been running for a little while now - and recently we've
noticed a lot of disk space disappearing. Shortly after that, a simple
du into our /var/spool returned a not so nice error:

  du: fts_read: Cannot allocate memory

No matter what command I run on that directory, I just don't seem to
have enough available resources to show the files, let alone delete
them (echo *, ls, find, rm -rf, etc.).

I'm hoping someone else here might have a suggestion as to what I can
do to fix this.

Thanks,
Phillip Salzman
In the last episode (Jan 19), Phillip Salzman said:
> I have a pair of servers that act as SMTP/AV gateways. It seems that
> even though we've told the AV software not to store messages, it is
> anyway.
>
> They've been running for a little while now - and recently we've
> noticed a lot of disk space disappearing. Shortly after that, a
> simple du into our /var/spool returned a not so nice error:
>
>   du: fts_read: Cannot allocate memory
>
> No matter what command I run on that directory, I just don't seem to
> have enough available resources to show the files let alone delete
> them (echo *, ls, find, rm -rf, etc.)

Try raising your datasize rlimit value; also see the thread
"Directories with 2 million files" at
http://lists.freebsd.org/pipermail/freebsd-current/2004-April/026170.html
for some other ideas. "find . | xargs rm" sounds promising.

--
Dan Nelson
dnelson@allantgroup.com
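Dan's "find . | xargs rm" suggestion can be sketched safely against a
scratch directory (the mktemp sandbox below stands in for /var/spool,
which you would obviously not want to experiment on; the -print0/-0
pairing assumes a find and xargs that support those flags, as FreeBSD's
and GNU's both do):

```shell
# Scratch directory standing in for the real spool (hypothetical path).
spool=$(mktemp -d)
touch "$spool/msg1" "$spool/msg2" "$spool/msg3"

# find streams directory entries one at a time instead of expanding
# them all in the shell the way "rm /var/spool/*" would, so it keeps
# working even when the directory is too large for a glob or for ls.
find "$spool" -type f -print0 | xargs -0 rm

ls -A "$spool"    # prints nothing: the files are gone
rmdir "$spool"
```

The -print0/-0 pair is only there to survive filenames with spaces or
newlines; on a spool full of machine-generated names a plain
`find . -type f | xargs rm` would behave the same.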
On Wed, 2005-Jan-19 21:30:53 -0600, Phillip Salzman wrote:
> They've been running for a little while now - and recently we've
> noticed a lot of disk space disappearing. Shortly after that, a
> simple du into our /var/spool returned a not so nice error:
>
>   du: fts_read: Cannot allocate memory
>
> No matter what command I run on that directory, I just don't seem to
> have enough available resources to show the files let alone delete
> them (echo *, ls, find, rm -rf, etc.)

I suspect you will need to write something that uses dirent(3) to scan
the offending directory and delete (or whatever) the files one by one.
Skeleton code (in perl) would look like:

  chdir $some_dir or die "Can't cd $some_dir: $!";
  opendir(DIR, ".") or die "Can't opendir: $!";
  while (my $file = readdir(DIR)) {
      next if ($file eq '.' || $file eq '..');
      next if (&this_file_is_still_needed($file));
      unlink $file or warn "Unable to delete $file: $!";
  }
  closedir DIR;

If you've reached the point where you can't actually read the entire
directory into user memory, expect the cleanup to take quite a while.

Once you've finished the cleanup, you should confirm that the directory
has shrunk to a sensible size. If not, you need to re-create the
directory and move the remaining files into the new directory.

--
Peter Jeremy
> From: "Phillip Salzman" <phill@sysctl.net>
> Subject: Very large directory
>
> I have a pair of servers that act as SMTP/AV gateways. It seems that
> even though we've told the AV software not to store messages, it is
> anyway.
>
> They've been running for a little while now - and recently we've
> noticed a lot of disk space disappearing. Shortly after that, a
> simple du into our /var/spool returned a not so nice error:
>
>   du: fts_read: Cannot allocate memory
>
> No matter what command I run on that directory, I just don't seem to
> have enough available resources to show the files let alone delete
> them (echo *, ls, find, rm -rf, etc.)

Even echo * sorts its output, and the sorting consumes a large amount
of resources. Try an ls with the -f option, which skips the sort.

To prove it to yourself, take any large directory, run echo *, and then
do the same with ls -f. I first noticed this years ago on an old SysV
system, where a directory that took 5 minutes to display with plain ls
was quite fast with ls -f.

--
Bill Vermillion - bv @ wjv . com
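Bill's point about sorting can be checked in any scratch directory; a
small sketch (the mktemp directory and file names are illustrative):

```shell
d=$(mktemp -d)
touch "$d/c" "$d/a" "$d/b"

# Plain ls sorts its output before printing, which is what gets
# expensive on a directory with millions of entries.
ls "$d"       # sorted: a, b, c

# ls -f returns entries in raw directory order with no sort
# ("." and ".." are included), so it starts printing immediately
# even on a huge directory.
ls -f "$d"

rm -r "$d"
```

On a three-file directory the difference is invisible; on a multi-million
file spool, skipping the sort is the difference between output appearing
at once and the command churning for minutes.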
On Wed, 19 Jan 2005, Phillip Salzman wrote:
> I have a pair of servers that act as SMTP/AV gateways. It seems that
> even though we've told the AV software not to store messages, it is
> anyway.
>
> They've been running for a little while now - and recently we've
> noticed a lot of disk space disappearing. Shortly after that, a
> simple du into our /var/spool returned a not so nice error:
>
>   du: fts_read: Cannot allocate memory
>
> No matter what command I run on that directory, I just don't seem to
> have enough available resources to show the files let alone delete
> them (echo *, ls, find, rm -rf, etc.)
>
> I'm hoping someone else here might have a suggestion as to what I can
> do to fix this.

fts(3) is quite memory intensive -- though more than strictly necessary
for the functionality required by du(1). du is running into an
administrative memory resource limit. Depending on the shell and
login.conf configuration you're using, you may need to use the
limits(1) command, the limit shell builtin, or tweak the user's class
settings. I run into this on my boxes with 1,000,000 files or so in a
directory, or with large directory trees (i.e., 7,000,000 files).

Robert N M Watson
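Robert's suggestion about the administrative limit, sketched for a
Bourne-style shell (csh/tcsh users would use "limit datasize" instead;
whether "unlimited" is permitted depends on the cap set in the user's
login.conf class):

```shell
# Show the current per-process data-segment limit, in kilobytes.
ulimit -d

# Raise it for this shell and its children; this can fail if the
# login class caps datasize below what we ask for.
ulimit -d unlimited || echo "datasize capped by login class"
ulimit -d
```

After raising the limit, re-run du against /var/spool from that same
shell so the larger limit is inherited.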