> -----Original Message-----
> From: centos-bounces at centos.org
> [mailto:centos-bounces at centos.org] On Behalf Of chrism at imntv.com
> Sent: Wednesday, September 13, 2006 10:01 AM
> To: CentOS mailing list
> Subject: [CentOS] benchmarking large RAID arrays
>
> I'm just wondering what folks are using to benchmark/tune large
> arrays these days. I've always used bonnie with file sizes 2-3 times
> physical RAM. Maybe there's a better way?
>
> Cheers,
>
To all on the list needing a disk test program:
I just uploaded this source file so I could share it with the list:
http://www.integratedsolutions.org/downloads/disktest.c
The link above points to a program we use for testing disks and RAID arrays.
You will need to compile it on your Linux system:
gcc -O2 disktest.c -o disktest
The program supports multiple loops and a configurable number of 1 GB
files, and it can even test your array to find the fastest write size
between 512 bytes and 1 MB.
Reading the source will show that we used fsync, which is why the
program gets realistic, meaningful results.
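If you don't want to dig through the source, the core of the write test
looks roughly like this (a simplified sketch of the idea, not the actual
disktest.c code; the file name and buffer size here are arbitrary):

/* Sketch of a timed, fsync'd 1 GB write (illustrative, not disktest.c). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

#define BUF_SIZE 8192
#define TOTAL_BYTES (1024L * 1024 * 1024)  /* one 1 GB test file */

int main(void)
{
    char buf[BUF_SIZE];
    struct timeval t0, t1;
    long written;
    double secs;
    int fd;

    memset(buf, 0xA5, sizeof(buf));
    fd = open("testfile.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    gettimeofday(&t0, NULL);
    for (written = 0; written < TOTAL_BYTES; written += BUF_SIZE)
        if (write(fd, buf, BUF_SIZE) != BUF_SIZE) { perror("write"); return 1; }
    fsync(fd);  /* flush the page cache so we time the disk, not RAM */
    gettimeofday(&t1, NULL);
    close(fd);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("wrote 1 GB in %.2f s (%.1f MB/s)\n", secs, 1024.0 / secs);
    return 0;
}

Without the fsync you would mostly be timing the Linux page cache,
which is why results from naive tests often look too good to be true.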
It also allows for reading every sector on a disk or an array so that
the storage can be thoroughly tested before going into production
(a zcav-like function).
The help looks like this:
Disktest, Version 2.02, integratedsolutions.org, GNU GPL
This program tests the disk read and write speeds
USAGE: disktest [options]
Options: -l loops     number of times to loop the program (default=1);
         -g gigs      number of gigs to use (1 file per GB) (default=1);
         -t test      have program determine buffer read/write size (default=8192);
         -z           run zcav mode (read 100 meg chunks from user defined file);
         -f file_name file or device to read from when in zcav mode;
         -V           display version and exit;
         -h           display this help and exit.
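To give you an idea of what the -t option does conceptually, a rough
sketch of that probe would look like this (my own illustration, not the
disktest.c source; the 256 MB probe size and file name are made up).
It times the same amount of fsync'd I/O at each power-of-two buffer
size from 512 bytes to 1 MB and reports the fastest:

/* Sketch of probing for the fastest write buffer size (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

#define TEST_BYTES (256L * 1024 * 1024)  /* 256 MB written per probe */

static double time_writes(size_t bufsize)
{
    char *buf = malloc(bufsize);
    struct timeval t0, t1;
    long done;
    int fd;

    memset(buf, 0x5A, bufsize);
    fd = open("probe.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    gettimeofday(&t0, NULL);
    for (done = 0; done < TEST_BYTES; done += bufsize)
        if (write(fd, buf, bufsize) != (ssize_t)bufsize) { perror("write"); exit(1); }
    fsync(fd);
    gettimeofday(&t1, NULL);

    close(fd);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int main(void)
{
    size_t size, best = 0;
    double secs, best_secs = 1e9;

    for (size = 512; size <= 1024 * 1024; size *= 2) {
        secs = time_writes(size);
        printf("%7zu bytes: %6.2f s\n", size, secs);
        if (secs < best_secs) { best_secs = secs; best = size; }
    }
    printf("fastest write size: %zu bytes\n", best);
    return 0;
}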
It also has a looping zcav-type function for testing new arrays
multiple times.
The command line for this would look like:
disktest -l6 -z -f /dev/sda
This will read sda from beginning to end, six times.
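The core of a zcav-style pass is simple: read the device sequentially
in 100 MB chunks and print the throughput of each chunk, so you can
watch the speed fall off toward the inner tracks. A rough sketch
(again my own illustration, not the actual disktest.c code):

/* Sketch of a zcav-style sequential read pass (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

#define CHUNK (100L * 1024 * 1024)  /* 100 MB per sample */
#define BUF   (1024 * 1024)         /* 1 MB read buffer */

int main(int argc, char **argv)
{
    char *buf = malloc(BUF);
    struct timeval t0, t1;
    long done, chunk = 0;
    ssize_t n;
    double secs;
    int fd;

    if (argc != 2) { fprintf(stderr, "usage: %s <device>\n", argv[0]); return 1; }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    for (;;) {
        gettimeofday(&t0, NULL);
        for (done = 0; done < CHUNK; done += n) {
            n = read(fd, buf, BUF);
            if (n <= 0) goto out;  /* end of device (or read error) */
        }
        gettimeofday(&t1, NULL);
        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("chunk %4ld: %6.1f MB/s\n", chunk++, 100.0 / secs);
    }
out:
    close(fd);
    free(buf);
    return 0;
}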
The performance test output is much more readable than bonnie++'s
(that's why we wrote it), and it will average the times from the
different loops to give you an average read and write speed.
Make sure you use a -g option that is at least 50% bigger than the
amount of RAM in the machine, e.g. with 4 GB of RAM, use -g6 or higher.
A typical command line looks like this (on a machine with 4 GB of RAM):
disktest -t -g6 -l4
This will write and read six 1 GB files in the local directory, test
for the fastest write buffer size, run 4 loops of the test, and
average the results.
Let me know if you have questions about it.
Seth Bardash
Integrated Solutions and Systems
1510 Old North Gate Road
Colorado Springs, CO 80921
719-495-5866
719-495-5870 Fax
719-337-4779 Cell
http://www.integratedsolutions.org
Failure can not cope with knowledge and perseverance!