Hey all, I am working on a SAN server for my office and would like some hardware recommendations. I am quite confused as to whether to go the raidz route or a standard RAID route. As for what this will be doing: I will have a VMware ESXi server connected via iSCSI, and it will be running multiple servers; as of today, for sure an Exchange 2007 server, a BlackBerry Enterprise Server, and several Linux servers running MySQL databases and web servers.

The options known to me thus far are:

a. a large raidz array or several raidz arrays
b. a hardware RAID 10 array for Exchange 2007 and then raidz arrays for everything else
c. several hardware RAID 10 arrays
d. none of the above

Once I settle that: what is the average rebuild time on raidz for, say, a 1 TB SATA disk? And finally, what kind of cards will give me the best performance for raidz use, or does it not really matter?

Thanks,
Greg
--
This message posted from opensolaris.org
On Mon, 22 Jun 2009, Greg wrote:

> a. a large raidz array or several raidz arrays
> b. a hardware raid 10 array for exchange 2007 and then raidz arrays for everything else.
> c. several hardware raid 10 arrays
> d. none of the above

I think that you will find that ZFS's equivalent of RAID 10 (load-shared mirrors) works very well. I can't imagine any reason to rely on a RAID array to do the mirroring, since ZFS will do it smarter, better, and more reliably. If you go with raidz for bulk storage, then you should consider using raidz2 for the extra security and peace of mind.

> Once I find this what is the average rebuild time on raidz say a 1tb
> sata disk. And finally what kinds of cards will I get the best
> performance from for the use of raidz. Or does it not really matter.

I don't know if there is any average time for this. The time depends on how full the disk is and how fragmented the metadata and user data blocks are. It also depends quite a lot on the disks you use.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
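[Editor's note: as an illustration of the load-shared mirror layout described above, here is a sketch of the pool-creation commands. The pool name and the `cXtYdZ` device names are placeholders; substitute the devices on your own system.]

```shell
# Hypothetical device names; substitute your own.
# Two top-level mirror vdevs: ZFS stripes (load-shares) I/O across
# them, which is the ZFS equivalent of a RAID 10 layout.
zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0

# A raidz2 alternative for bulk storage, which survives the loss of
# any two disks in the vdev:
# zpool create bulk raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Verify the layout and health of the pool:
zpool status tank
```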
Oh boy, there are a lot of things here :)

How many people in your office will be using these services? If it is just 50 people or so, you would probably be fine with just about any configuration. 500 or 5000 would be a different story, and you would have to be much more careful.

If possible, you should measure what your requirements are under your existing load. For example, figure out what kind of I/O demands are being generated by each application. It would also be nice to understand the I/O patterns, i.e. I/O size, read/write ratio, randomness. Then you can use a tool like Iometer to simulate your load on the ZFS box from a VM on the ESX box, and figure out what RAID sets are required to support your load.

Another rough rule of thumb you can use is to sum the read and write IOPS at the 95th percentile (take a good guess), and use these equations to determine how many back-end I/Os you will need to support:

RAID 10: 1 * measured read IOPS + 2 * measured write IOPS
RAID 5:  1 * measured read IOPS + 4 * measured write IOPS

Each SATA disk can support on the order of 100 IOPS (considering a lot of the I/Os are random), and a 15k SAS disk around 200 IOPS. So if your total app load were 500 read and 500 write IOPS, then:

RAID 10: 500 + 2*500 = 1500 IOPS, so ~10-15 SATA disks or ~6-8 15k SAS disks
RAID 5:  500 + 4*500 = 2500 IOPS, so ~20-25 SATA disks or ~12-14 SAS disks

These are *really* rough numbers, and conservative. Honestly, you could spend ages on an I/O study like this.

There is a ZFS Best Practices Guide out there which makes good reading, and talks to the pros and cons of the different RAID types offered by ZFS, how many disks to put in a single raidz, etc. I think most people using ZFS go with the software RAID sets to take advantage of checksums and self-healing. Performance on modern hardware is fine.

Regarding cards: if you are not going to do hardware RAID, just get a SAS HBA. It makes life so much simpler. LSI makes a good selection of them. No RAID functionality, just good, fast I/O.
Attach that to a SAS JBOD, and you can mix and match SATA and SAS drives to fit your application. If you want to go hardware RAID, try to get a card that supports JBOD mode so you can use software RAID if you change your mind.

-Scott
--
This message posted from opensolaris.org
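[Editor's note: the IOPS rule of thumb above can be sketched as a quick back-of-the-envelope calculation. The 500/500 workload and the 100/200 IOPS-per-disk figures are the placeholder values from the post, not measurements.]

```shell
# Assumed workload (placeholders): 500 read + 500 write IOPS at the
# 95th percentile.
read_iops=500
write_iops=500

# RAID 10: each write costs 2 back-end I/Os (one per mirror side).
raid10=$(( read_iops + 2 * write_iops ))

# RAID 5: each small write costs ~4 back-end I/Os
# (read old data, read old parity, write data, write parity).
raid5=$(( read_iops + 4 * write_iops ))

# Divide by per-disk capability: ~100 IOPS for SATA, ~200 for 15k SAS.
echo "RAID 10: $raid10 back-end IOPS, ~$(( raid10 / 100 )) SATA or ~$(( (raid10 + 199) / 200 )) SAS disks"
echo "RAID 5:  $raid5 back-end IOPS, ~$(( raid5 / 100 )) SATA or ~$(( (raid5 + 199) / 200 )) SAS disks"
```

This lines up with the ranges quoted above: roughly 15 SATA or 8 SAS disks for RAID 10, and 25 SATA or 13 SAS disks for RAID 5.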
Thank you, both of you! I am going to look at these guides and begin tweaking as soon as I have some hardware in. User-wise it will be fewer than 100 in the immediate future, but I am planning for expansion. Do you recommend using Solaris 10 or OpenSolaris? I know that OpenSolaris is the bleeding edge of everything, so I was thinking of using it.

Thanks!
Greg
--
This message posted from opensolaris.org
For ~100 people, I like Bob's answer. RAID 10 will get you lots of speed. Perhaps RAID 50 would be just fine for you as well and give you more space, but without measuring, you won't be sure. Don't forget a hot spare (or two)!

Your MySQL database: will that generate a lot of I/O? Also, to ensure you can recover from failures, consider separate pools for your database files and log files, both for MySQL and Exchange.

Good luck!

-Scott
--
This message posted from opensolaris.org
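[Editor's note: a sketch of the separate-pools-plus-spares layout suggested above. All pool, dataset, and `cXtYdZ` device names are hypothetical placeholders.]

```shell
# Mirrored pool for database / Exchange data files:
zpool create dbpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Separate mirrored pool for transaction logs, so mostly-sequential
# log writes do not compete with random data I/O, and a failure in
# one pool does not take out both copies of your recovery chain:
zpool create logpool mirror c2t0d0 c2t1d0

# Hot spares, so a resilver can start without waiting for a human:
zpool add dbpool spare c1t4d0
zpool add logpool spare c2t2d0

# Per-application datasets within each pool:
zfs create dbpool/mysql
zfs create logpool/mysql-binlog
```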