New ZFS user here. The hardware is a Nexsan SATABeast with 42 500 GB SATA drives and dual-channel 4 Gb Fibre Channel, attached over the fibre network to Solaris 10 running on an HP blade. We may add a second Solaris blade later for failover.

I am looking for reliability with a decent amount of usable space; I would prefer not to lose 50% of the capacity to redundancy. This will be a ZFS NFS server with 100+ clients attached, doing mostly reads but some writes.

It appears this device does not have the ability to expose raw disks to the OS. Instead you define RAID groups, then create volumes from those RAID groups that can be exposed to the OS. So I was thinking of the following: 8 hardware RAID-5 groups (5 drives each) plus 2 SAN hot spares, with a raidz of those 8 RAID groups, for roughly 14 TB usable. (A rough zpool sketch of what I mean is at the end of this message.)

I did read in a FAQ that doing double redundancy is not recommended since parity would have to be calculated twice, and I was wondering what the alternatives are. Not doing ZFS redundancy means I lose the checksum abilities. Is that a good trade-off instead of doing the double redundancy?

Thanks,
Reed
--
This message posted from opensolaris.org
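With the eight RAID-5 volumes exported to Solaris as LUNs, the pool would be built roughly like this (the device names are just placeholders):

    # one raidz vdev made from the 8 hardware RAID-5 LUNs
    zpool create tank raidz \
        c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        c2t4d0 c2t5d0 c2t6d0 c2t7d0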
On Tue, 16 Dec 2008, Reed Gregory wrote:

> 8 hardware RAID-5 groups (5 drives each) plus 2 SAN hot spares, with a
> raidz of those 8 RAID groups, for roughly 14 TB usable.
>
> I did read in a FAQ that doing double redundancy is not recommended
> since parity would have to be calculated twice, and I was wondering
> what the alternatives are.

Parity calculations are in the noise. You are reading the wrong FAQs.

It is likely that, if you take care, you can carve out the individual disks as single-drive RAID-0 volumes. Then you can give ZFS individual access to all the disks and let ZFS do the RAID. That is what I did. Not one problem in 11 months. (A rough sketch is at the end of this message.)

> Not doing ZFS redundancy means I lose the checksum abilities. Is that a
> good trade-off instead of doing the double redundancy?

ZFS checksums are independent of redundancy. Without redundancy ZFS is not able to automatically repair the bad data.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
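For example, with every drive exported as its own single-drive RAID-0 LUN (device names below are placeholders), eight raidz2 vdevs of five drives each plus two ZFS hot spares would use all 42 drives and let ZFS repair whatever its checksums catch:

    # 8 raidz2 vdevs of 5 single-drive LUNs each, plus 2 hot spares (42 drives total)
    zpool create tank \
        raidz2 c3t0d0  c3t1d0  c3t2d0  c3t3d0  c3t4d0  \
        raidz2 c3t5d0  c3t6d0  c3t7d0  c3t8d0  c3t9d0  \
        raidz2 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 \
        raidz2 c3t15d0 c3t16d0 c3t17d0 c3t18d0 c3t19d0 \
        raidz2 c3t20d0 c3t21d0 c3t22d0 c3t23d0 c3t24d0 \
        raidz2 c3t25d0 c3t26d0 c3t27d0 c3t28d0 c3t29d0 \
        raidz2 c3t30d0 c3t31d0 c3t32d0 c3t33d0 c3t34d0 \
        raidz2 c3t35d0 c3t36d0 c3t37d0 c3t38d0 c3t39d0 \
        spare  c3t40d0 c3t41d0

Usable space works out to about 8 x 3 x 500 GB, roughly 12 TB, and every vdev survives two drive failures.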
Yeuch. Normally I'd just say let ZFS run raidz2 for your dual parity, but if you can't expose disks I guess the solution you have isn't too bad.

The downside, of course, is that if you ever do lose one of your RAID-5 groups, ZFS is going to have quite a lot of data to resilver. However, you definitely want ZFS to be managing at least some redundancy, so I actually think you've got a reasonable setup there. It's not perfect, but given the limitations you're having to deal with, I think I'd be happy running that.
--
This message posted from opensolaris.org
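For what it's worth, a resilver of that size can at least be watched while it runs:

    # after replacing a failed LUN, check resilver progress and overall health
    zpool status tank     # shows "resilver in progress" with percent done
    zpool status -x       # prints only pools that currently have problems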
Ross wrote:

> Yeuch. Normally I'd just say let ZFS run raidz2 for your dual parity, but
> if you can't expose disks I guess the solution you have isn't too bad.
>
> The downside, of course, is that if you ever do lose one of your RAID-5
> groups, ZFS is going to have quite a lot of data to resilver. However, you
> definitely want ZFS to be managing at least some redundancy, so I actually
> think you've got a reasonable setup there. It's not perfect, but given the
> limitations you're having to deal with, I think I'd be happy running that.

I suggest you look at using a JBOD such as the J4500 instead:

http://www.sun.com/storage/disk_systems/expansion/4500/index.xml

That way, ZFS can do the right thing without someone else's hardware RAID getting in the way.

-- Andrew
Bob Friesenhahn wrote:

> On Tue, 16 Dec 2008, Reed Gregory wrote:
>
>> 8 hardware RAID-5 groups (5 drives each) plus 2 SAN hot spares, with a
>> raidz of those 8 RAID groups, for roughly 14 TB usable.
>>
>> I did read in a FAQ that doing double redundancy is not recommended
>> since parity would have to be calculated twice, and I was wondering
>> what the alternatives are.
>
> Parity calculations are in the noise. You are reading the wrong FAQs.
> It is likely that, if you take care, you can carve out the individual
> disks as single-drive RAID-0 volumes. Then you can give ZFS individual
> access to all the disks and let ZFS do the RAID. That is what I did.
> Not one problem in 11 months.
>
>> Not doing ZFS redundancy means I lose the checksum abilities. Is that a
>> good trade-off instead of doing the double redundancy?
>
> ZFS checksums are independent of redundancy. Without redundancy ZFS
> is not able to automatically repair the bad data.
>
> Bob

The older ATABeast could only do 32 LUNs per controller, I think, but the SATABeast specs say it supports up to 256 LUNs per controller. So you could do a separate RAID-0 LUN per drive (as Bob suggests above) if the interface allows it, and it should.

With individual LUNs you could do 6 raidz2 vdevs of 6 drives each, 2 spares, and a couple of cache disks; or 8 raidz2 vdevs of 5 drives each and 2 spares. If that doesn't work, what about doing 10 RAID-0 groups of 4 drives each on the Beast (leaving the 2 spares) and then building 2 raidz2 vdevs (5 RAID-0 groups each) from those? A sketch of that last option is below.

The Beast software will send error reports on failing drives, etc., so be sure to turn all those bells and whistles on; I think I had the built-in testing run once per month, and Nexsan overnighted a drive when I needed a replacement.

Some of the Nexsan systems are listed as offering JBOD, but it is not listed in the SATABeast specs. They should support JBOD to cater to ZFS users.

I configured mine to spin down the drives when they were not in use and that feature worked well*, especially if you only do backups at night or have a user base with limited working hours.

* except for the abject terror of typing ls and having nothing happen for a few long seconds...
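Roughly, assuming the Beast exports those 10 four-drive RAID-0 groups as LUNs (device names are placeholders):

    # 10 hardware RAID-0 LUNs (4 drives each) arranged as 2 raidz2 vdevs of 5 LUNs
    zpool create tank \
        raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 \
        raidz2 c4t5d0 c4t6d0 c4t7d0 c4t8d0 c4t9d0

    # confirm the layout
    zpool status tank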