Hey folks,
We have Munin set up for longer-term performance monitoring, and it
has been extremely useful to us for what it does. However, it is
hard-coded to poll systems at 5-minute intervals, which of course
is not much use when you need to dig into something in more detail.
What we typically do for load tests where we need more detail is run
this command:
/usr/lib64/sa/sadc -d -I -F 2 /var/log/foo/bar
which logs data every 2 seconds.
We now have a problem with our PostgreSQL server and want to set
something up to record sadc data more frequently than the 1-minute
granularity cron allows. 2 seconds is probably a bit much, though;
we are thinking more like 5 or 10 seconds. So the above command
would be ideal.
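Presumably the same invocation with a longer interval would do it,
e.g. for 10-second samples (untested here; the output path is just
the placeholder from above):

/usr/lib64/sa/sadc -d -I -F 10 /var/log/foo/bar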
Except ... what if it fills up the disk?
I looked at the man page and do not see any obvious way to get it to
write out to a file of a given size and just keep overwriting the
oldest data in that file. That way we could pre-allocate a big file
of a given size and be able to store the last X minutes of sadc
data. So when the PG system crashes again (sigh), we can review the
data for the last X minutes before the crash.
One solution I can think of is some kind of filesystem that
implements this sort of circular file.
Any ideas?
Any other obvious solution I am missing?
I can think of ways to do the next best thing in a script, e.g.
alternate between two files and switch back and forth
programmatically (rough sketch below).
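Something like this, maybe (untested sketch; the directory, the
10-second interval and the 10-minute window per file are just
examples, and the sadc flags are copied from the command above):

#!/bin/sh
# Keep roughly the last 10-20 minutes of sadc data by alternating
# between two output files. sadc appends to an existing file, so the
# older file is removed before each reuse.
DIR=/var/log/foo
INTERVAL=10              # seconds between samples
WINDOW=600               # seconds of data per file (10 minutes)
COUNT=$((WINDOW / INTERVAL))
while true; do
    for f in sadc.0 sadc.1; do
        rm -f "$DIR/$f"
        /usr/lib64/sa/sadc -d -I -F $INTERVAL $COUNT "$DIR/$f"
    done
done

At any point you have the file currently being written plus the
previous complete one, so the last WINDOW seconds are always on disk
and total usage stays bounded at roughly two files' worth.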
thanks,
-Alan
--
"Don't eat anything you've ever seen advertised on TV"
- Michael Pollan, author of "In Defense of Food"