Some people use ZFS together with hardware RAID. Is the recommended setup to use only ZFS, or ZFS plus HW RAID? What are the advantages/disadvantages? Should you use HW RAID in conjunction with ZFS, if possible? Or only ZFS? Which is best?
Orvar Korvar wrote:
> Some people use ZFS together with hardware RAID. Is the recommended setup to use only ZFS, or ZFS plus HW RAID?
>
> What are the advantages/disadvantages? Should you use HW RAID in conjunction with ZFS, if possible? Or only ZFS? Which is best?

There are many paths to the top of the mountain.

To answer this question, you need to look at the features you desire beyond simply having redundancy. In other words, comparing just "HW" RAID-1 with ZFS mirroring won't be sufficient to arrive at an answer.
 -- richard
[having late lunch hour for beloved Orvar]

one more baby scenario for your consideration -- you can give me some ZFS based code and I will go to china and burn some HW RAID ASICs to fulfill your desire?

best,
z

----- Original Message -----
From: "Richard Elling" <Richard.Elling at Sun.COM>
To: "Orvar Korvar" <knatte_fnatte_tjatte at yahoo.com>
Cc: <zfs-discuss at opensolaris.org>
Sent: Monday, January 12, 2009 11:14 AM
Subject: Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

> There are many paths to the top of the mountain.
>
> To answer this question, you need to look at the features you desire
> beyond simply having redundancy. In other words, comparing just "HW" RAID-1
> with ZFS mirroring won't be sufficient to arrive at an answer.
> -- richard
On 12-Jan-09, at 3:43 PM, JZ wrote:
> [having late lunch hour for beloved Orvar]
>
> one more baby scenario for your consideration --
> you can give me some ZFS based code and I will go to china and burn
> some HW RAID ASICs to fulfill your desire?

Is that what passes for product development these days?

--T
Not really. In chinatown, we are trying to do some ZFS home based NAS, no need to burn anything.

best,
z

----- Original Message -----
From: "Toby Thain" <toby at telegraphics.com.au>
To: "JZ" <jz at excelsioritsolutions.com>
Cc: "Richard Elling" <Richard.Elling at Sun.COM>; "Orvar Korvar" <knatte_fnatte_tjatte at yahoo.com>; <zfs-discuss at opensolaris.org>
Sent: Monday, January 12, 2009 5:04 PM
Subject: Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

> Is that what passes for product development these days?
>
> --T
Ok, I draw the conclusion that there is no consensus on this. Nobody really knows for sure.

I am in the process of converting some Windows guys to ZFS, and they think that HW RAID + ZFS should be better than ZFS alone. I tell them they should ditch their HW RAID, but I cannot really justify why. That is why I am asking this question. And no one at Sun really seems to know. I've asked this in another thread, with no answer.

I will tell people that ZFS + HW RAID is good enough, and I will not recommend against HW RAID anymore.
>>>>> "ok" == Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> writes:

    ok> Nobody really knows for sure.

    ok> I will tell people that ZFS + HW raid is good enough, and I
    ok> will not recommend against HW raid anymore.

jesus, ok fine. if you threaten to let ignorant Windows morons strut around like arrogant experts, voicing some baseless feel-good opinion and then burdening you with proving a negative, then I'll take the bait and do your homework for you.

What people "don't really know for sure" is what hardware RAID you mean. There are worlds of difference between NetApp RAID and Dell PERC RAID: differences in their ability to avoid data loss in common failure scenarios, not just in some bulleted features or admin tool GUIs that the Windows admins understand. For example, among dirt-cheap RAID5s, whether or not there's a battery-backed write cache influences how likely they are to corrupt the filesystem on top, while the Windows admins probably think it's for "performance issues" or something. And there are other things. NetApp has most of them, so if you read a few of the papers they publish bragging about their features you'll get an idea how widely the robustness of RAIDs can vary:

  http://pages.cs.wisc.edu/~krioukov/Krioukov-ParityLost.pdf

Advantages of ZFS-on-some-other-RAID:

 * probably better performance for RAID6 than for raidz2. It is not
   because of "checksum overhead". It's because RAID6 stripes use less
   seek bandwidth than ZFS, which mostly does full-stripe writes like
   RAID3.

 * it's performant to use a filesystem other than ZFS on a hardware
   RAID, which maybe protects your investment somewhat.

 * better availability, sometimes. It's important to distinguish
   between availability and data loss. Availability means that when a
   disk goes bad, applications don't notice. Data loss is about, AFTER
   the system panics, the sysadmin notices, unplugs bad drives,
   replaces things, reboots: is the data still there or not? And is
   *ALL* the data there, satisfying ACID rules for databases and MTAs,
   or just most of it? Because it is supposed to be ALL there. Even
   when you have to intervene, that is not license to break all the
   promises and descend into chaos. Some (probably most) hardware RAID
   has better availability, being better at handling failing or
   disconnected drives than ZFS. But ZFS on JBOD is probably better at
   avoiding data loss than ZFS on HW RAID.

 * better exception handling, sometimes. Really fantastically good
   hardware RAID can handle the case of two mirrored disks, both with
   unreadable sectors, where none of the unreadable sectors are in
   common. Partial disk failures are the common case, not the
   exception, while the lower-quality RAID implementations most of us
   are stuck with treat disks as either good or bad, as a whole unit.
   I expect most hardware RAIDs don't have much grace in this
   department either, though. It seems really difficult to do well,
   because disks lock up when you touch their bad blocks, so to
   gracefully extract information from a partially-failed disk you
   have to load special firmware into them and/or keep a list of
   poison blocks which you must never accidentally read.

   My general procedure for recovering hardware RAIDs is: (0) shut
   down the RAID and use it as JBOD, (1) identify the bad disks and
   buy blank disks to replace them, (2) copy the bad disks onto the
   blank disks using dd_rescue or dd conv=noerror,sync, and (3) run
   the RAID recovery tool. There are a lot of bad things about this
   procedure!
   It denies the RAID layer access to the disk's reports about which
   sector is bad, which leads to parity pollution and silent
   corruption that slips through the RAID layer, discussed below. It
   can also fail to work, if the RAID identifies disks the way ZFS
   sometimes does, by devid/serialno rather than by a disk label or
   port number. But RAID layers read the report "bad sector" as in
   fact a report "bad disk!", so with the very cheap RAIDs I've used,
   this corruption-prone 0,1,2,3 procedure, which works around the bad
   exception handling, saves more data than the supported procedure.

 * nonvolatile cache. Some cheap hardware RAID gives you a
   battery-backed write cache that can be more expensive to get the
   ZFS slog way.

Advantages to ZFS-on-JBOD:

 * unified admin tool. There is one less layer, so you can administer
   everything with zpool. With hardware RAID you will have to use both
   zpool and the hardware RAID's configurator.

 * most hardware RAID has no way to deal with silent corruption. It
   can deal with latent sector errors, when the drive explicitly
   reports a failed read, but it has no way to deal with drives that
   silently return incorrect data. There are a huge number of patterns
   in which this happens in real life according to NetApp, and they
   have names for them like "torn writes" and "misdirected writes" and
   so on. RAID5's read-modify-write behavior can magnify this type of
   error through "parity pollution."

   http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf

   I think there are some tricks in Solaris for Veritas and SVM
   mirrors, which applications like Oracle can use if they do
   checksumming above the filesystem level. Oracle can say, "no, I
   don't like that block. Can you roll the dice again? What else might
   be at that lseek offset?" See DKIODMR under dkio(7I). I'm not sure
   in exactly what circumstances the tricks work, but I am sure they
   will NOT work on hardware RAID-on-a-card. Also the tricks are not
   used by ZFS, so if you insert ZFS between your RAID and your
   application, the DKIODMR-style checksum redundancy is blocked off.
   The idea is that you should use ZFS checksums and ZFS redundancy
   instead, but you can't do that by feeding ZFS a single LUN.

 * hardware RAID ties your data to a RAID implementation. This is more
   constraining than tying it to ZFS, because the Nexenta and
   OpenSolaris licenses allow you to archive the ZFS software
   implementation and share it with your friends. This has bitten
   several of my friends, particularly with "RAID-on-a-card" products.
   It is really uncool to lose your entire dataset to a bad
   controller, and then be unable to obtain the same controller with
   the same software revision, because the hardware has gone through
   so many steppings, and the software isn't freely redistributable,
   archivable, or even clear that it exists at all. Sometimes the
   configurator tool is very awkward to run. There may even be a
   remedial configurator in the firmware and a full-featured
   configurator which is difficult to run just when you need it most,
   when recovering from a failure. Some setups are even worse, and are
   unclear about where they store the metadata: on the diskset or in
   the controller? So you could lose the whole dataset because of a
   few bits in some $5 lithium battery RTC chip. Being a good sysadmin
   in this environment requires levels of distrust that are really
   unhealthy.
   The more expensive implementations hold your data hostage to a
   support contract, so while you are paying to buy the RAID you are
   in fact more like renting a place to put your data, and they
   reserve the right to raise your rent. Without the contract you
   don't just lose their attention: you cannot get basic things like
   older software, or manuals for the software you have now, and they
   threaten to sue you for software license violation if you try to
   sell the hardware to someone else.

 * there are bugs either in ZFS or in the overall system that make ZFS
   much more prone to corruption when it's not managing a layer of
   redundancy. In particular, we know ZFS does not work well when the
   underlying storage reports writes are committed to disk when
   they're not, and this problem seems to be rampant:

    * some SATA disks do it, but the people who know which ones aren't
      willing to tell. They only say "a major vendor."

    * Sun's sun4v hypervisor does it for virtualized disk access on
      the T2000.

    * Sun's iSCSI target does it (iscsitadm; not sure yet about
      Comstar.)

    * We think many PeeCee virtualization platforms like virtualbox
      and vmware and stuff might do it.

    * Linux does it if you are using disk through LVM2.

   The problem is rampant, seems to be more dangerous to ZFS than to
   other filesystems, and progress in tracking it down and fixing it
   is glacial. Giving ZFS a mirror or a raidz seems to improve its
   survival. To work around this with hardware RAID, you need to make
   a zpool that's a mirror of two hardware RAIDsets (see the sketch at
   the end of this message). This wastes a lot of disk. If you had
   that much disk with ZFS JBOD, you could make much better use of it
   as a filesystem-level backup, like backup with rsync to a non-ZFS
   filesystem or to a separate zpool with 'zfs send | zfs recv'. You
   really need this type of backup with ZFS because of the lack of
   fsck and the huge number of panics and assertions.

HTH. To my reckoning the consensus best practice is usually JBOD right now. When I defend ZFS-over-RAID5 it is mostly because I think the poor availability during failures and the corruption bugs discussed in the last point need to be tracked down and squashed.

Here's my list of papers that have been mentioned on this list, so you can catch up. You can also dump all the papers on the annoying Windows admins, and when they say "I really think hardware RAID has fewer 'issues' because I just think so. It's your job to prove why not," then you can answer "well, have you read the papers? No? Then take my word for it." If they doubt the papers' authority, then cite the price the people writing the papers charge for their services; the Windows admins should at least understand that.

http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf
http://www.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.ps
http://labs.google.com/papers/disk_failures.html
http://pages.cs.wisc.edu/~krioukov/Krioukov-ParityLost.pdf
http://www.nber.org/sys-admin/linux-nas-raid.html
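To make the workaround and the send/recv backup concrete, here is a rough, untested sketch. The device names (c1t0d0, c2t0d0), the dataset name tank/data, and the second pool "backup" are all placeholders, not a tested configuration:

    # mirror two hardware RAIDsets so ZFS manages a layer of redundancy
    # (each LUN here is assumed to be a RAID5 volume exported by an array)
    zpool create tank mirror c1t0d0 c2t0d0

    # filesystem-level backup to a separate pool with send/recv,
    # using a snapshot as the source ("backup" is assumed to already exist)
    zfs create tank/data
    zfs snapshot tank/data@monday
    zfs send tank/data@monday | zfs recv backup/data

Incremental follow-ups can use 'zfs send -i' between two snapshots, which keeps the backup cheap after the first full send.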
I happen to have some 3510 and 3511 on SAN, and older 3310 direct-attach arrays around here. Also some newer 2540 arrays.

Our preferred setup for the past year or so is 2 arrays available to the server. From each array, make 2 LUNs available. Take these LUNs on the server and ZFS them as RAID-10 (rough sketch below).

The implication of this is that the existing hardware RAID is leveraged for its buffering and its smarts in hot-sparing failed drives to the right LUN. Do you really want a drive in a 2nd chassis being a hot spare for drives in another chassis? Until ZFS gives a clean way to designate that hot spares go only with this group of disks and not others, you have a dependency problem.

How big a deal this is, is arguable. As is, for example, our preference to "waste" 2 arrays in this manner to gain the redundancy that an entire drive chassis can fail without interrupting ops. YMMV.
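Roughly what that looks like at the zpool level, assuming array A exports c2t0d0 and c2t1d0 and array B exports c3t0d0 and c3t1d0 (hypothetical device names, untested). Each mirror pairs one LUN from each array, so a whole chassis can fail and every mirror still has one side intact:

    # RAID-10: two mirror vdevs, each mirror split across the two arrays
    zpool create tank \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0

    # confirm the layout
    zpool status tank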
Oh, thanks for your very informative answer. I've added a link to your information in this thread.

But... sorry, I wrote it wrong. I meant "I will not recommend against HW raid + ZFS anymore" instead of "... recommend against HW raid".

The Windows people's question is: which is better?
1. HW raid + ZFS
2. ZFS

I've told them that ZFS prefers NOT to have HW raid. But they ask me why they shouldn't use ZFS + HW raid and why they should use only ZFS.

They are using ZFS no matter what. The question is: is ZFS that good that HW raid can be omitted? Does HW raid + ZFS give any gains compared to only ZFS?

It seems that I have to recommend that they keep their HW raid when they try out ZFS? I've gotten them interested in ZFS, enough to try it out. It was a lengthy discussion that took much time and patience.
So you recommend ZFS + HW raid, instead of only ZFS? Is it preferable to add HW raid to ZFS?
On 13 Jan 2009, at 21:49, Orvar Korvar wrote:
> The Windows people's question is: which is better?
> 1. HW raid + ZFS
> 2. ZFS

It's not so much the HW raid which is helpful but the low-latency writes to NVRAM that come with it. And that is only helpful inasmuch as you don't compare it to a solution with a separate intent log over SSD or an NVRAM-based LUN (see the sketch below).

> I've told them that ZFS prefers NOT to have HW raid. But they ask me
> why they shouldn't use ZFS + HW raid and why they should use only ZFS.
>
> They are using ZFS no matter what. The question is: is ZFS that good
> that HW raid can be omitted? Does HW raid + ZFS give any gains
> compared to only ZFS?

I think they will have to compare the pluses and minuses of the 2 architectures and decide for themselves where they rather want to be.

-r
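For reference, a separate intent log is attached at the pool level. A minimal sketch, assuming a pool named "tank" and an SSD visible as c4t0d0 (both names are placeholders):

    # add an SSD as a dedicated ZFS intent log (slog) for low-latency
    # synchronous writes, instead of relying on array NVRAM
    zpool add tank log c4t0d0

    # the log device then shows up in the pool layout
    zpool status tank

A mirrored log (zpool add tank log mirror c4t0d0 c5t0d0) is probably the safer choice on current builds, since losing an unmirrored slog is painful.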
Orvar Korvar wrote:
> They are using ZFS no matter what. The question is: is ZFS that good
> that HW raid can be omitted? Does HW raid + ZFS give any gains
> compared to only ZFS?

As you can see, there isn't a straight answer because there are so many possibilities.

One golden rule that might help: always let ZFS handle the top-level redundancy so it can correct errors.

Whether you export JBODs from a SAN and build a raidz, or export RAID5 LUNs and build a ZFS mirror, what matters is that ZFS has redundancy (see the sketch below).

--
Ian.
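In zpool terms, the golden rule comes down to something like this. These are alternative layouts for comparison, not a script to run as-is, and all device names are placeholders:

    # what NOT to rely on for self-healing: a single array LUN,
    # where ZFS can detect bad blocks but has nothing to repair from
    zpool create tank c2t0d0

    # letting ZFS handle the top-level redundancy instead:
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0   # raidz over JBODs
    zpool create tank mirror c2t0d0 c3t0d0                # mirror of two RAID5 LUNs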
I dunno man, I'm just telling you what works well for me with the hardware I have here.

If you can go out and buy all new equipment like 7000-series storage, then you don't need HW RAID.

If you don't need HA & clustering & all that jazz, then just get a bunch of big drives and ZFS RAID-10 them.

As a professional you will have to weigh the options, test as much as you are able, then document the consequences of your choice for future support people.
>>>>> "ok" == Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> writes:

    ok> Does HW raid + ZFS give any gains, compared to only
    ok> ZFS?

yes. listed in my post.

    ok> The question is, is ZFS that good, that HW raid can be
    ok> omitted?

No, that is not The Question! You assume there are no downsides to using ZFS with hardware RAID, while I listed quite a few serious ones in my post, including "more likely to lose your data."
Vincent Fox wrote:
> I dunno man, I'm just telling you what works well for me with the hardware I have here.
>
> If you can go out and buy all new equipment like 7000-series storage, then you don't need HW RAID.

IMHO a 7000 system is "hardware" RAID: a controller providing data services.
 -- richard
OMG, we are still doing this Orvar thing?

I am even sick of seafood now and had a 2X-BLT NYC style for lunch today, so nice I didn't even bother checking on beloved Orvar...

You open folks may not need 7000 because you dunno why it's cool. If you pay for those, you will never have to be at this list playing with Orvar, you can just call up the toll-free 24X7 support line and magic will happen and your baby data will be happy!

best,
z, at home

----- Original Message -----
From: "Richard Elling" <Richard.Elling at Sun.COM>
To: "Vincent Fox" <vincent_b_fox at yahoo.com>
Cc: <zfs-discuss at opensolaris.org>
Sent: Tuesday, January 13, 2009 7:49 PM
Subject: Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

> IMHO a 7000 system is "hardware" RAID: a controller providing data
> services.
> -- richard
The way I do IT is very much in line with Ian's approach.

I wouldn't care too much about SW RAID or HW RAID, but only because I use a layer of real "application aware" storage tier to interface with the specific applications. [they pay other folks to do the RAID layer]

But this approach is clearly out of your open budgets, I guess. So I dunno if this is something you should be thinking about. [before really knowing which app needs what, otherwise this can be counter-effective, no joke]

Best,
z

----- Original Message -----
From: "Ian Collins" <ian at ianshome.com>
To: "Orvar Korvar" <knatte_fnatte_tjatte at yahoo.com>
Cc: <zfs-discuss at opensolaris.org>
Sent: Tuesday, January 13, 2009 4:06 PM
Subject: Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

> One golden rule that might help: always let ZFS handle the top-level
> redundancy so it can correct errors.
>
> Whether you export JBODs from a SAN and build a raidz, or export RAID5
> LUNs and build a ZFS mirror, what matters is that ZFS has redundancy.
>
> --
> Ian.
Hi;

It's all about performance when you consider H/W RAID: software RAID puts extra overhead on your OS. But as ZFS is fast, I will always prefer ZFS-based RAID. It will also save the cost of a RAID card.

===============
Ashish Nabira
nabira at sun.com
http://sun.com
"Work is worship."
===============

On 13-Jan-09, at 4:49 PM, Orvar Korvar wrote:
> Ok, I draw the conclusion that there is no consensus on this. Nobody
> really knows for sure.
>
> I am in the process of converting some Windows guys to ZFS, and they
> think that HW RAID + ZFS should be better than ZFS alone. I tell them
> they should ditch their HW RAID, but I cannot really justify why.
What does this mean? Does it mean that ZFS + HW RAID with RAID-5 is not able to heal corrupted blocks? Then is this evidence against ZFS + HW RAID, and you should use only ZFS?

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

"ZFS works well with storage based protected LUNs (RAID-5 or mirrored LUNs from intelligent storage arrays). However, ZFS cannot heal corrupted blocks that are detected by ZFS checksums."
I think maybe it means that if ZFS can't 'see' the redundant copies of a block (the controller handles those in HW RAID), it can't use them to heal a block that fails its checksum.

cheers,
Blake

On Tue, Jan 20, 2009 at 6:34 AM, Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> wrote:
> What does this mean? Does it mean that ZFS + HW RAID with RAID-5 is not able to heal corrupted blocks? Then is this evidence against ZFS + HW RAID, and you should use only ZFS?
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> "ZFS works well with storage based protected LUNs (RAID-5 or mirrored LUNs from intelligent storage arrays). However, ZFS cannot heal corrupted blocks that are detected by ZFS checksums."
Orvar Korvar wrote:
> What does this mean? Does it mean that ZFS + HW RAID with RAID-5 is not able to heal corrupted blocks? Then is this evidence against ZFS + HW RAID, and you should use only ZFS?
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> "ZFS works well with storage based protected LUNs (RAID-5 or mirrored LUNs from intelligent storage arrays). However, ZFS cannot heal corrupted blocks that are detected by ZFS checksums."

It means that if ZFS does not manage redundancy, it cannot correct bad data.
 -- richard
On 1/20/2009 1:14 PM, Richard Elling wrote:
> It means that if ZFS does not manage redundancy, it cannot correct
> bad data.

And there's no rule that says you can't take two array RAID volumes, of any level, and mirror them with ZFS. (Or a few LUNs with raidz....)
So ZFS is not hindered at all if you use it in conjunction with HW RAID? ZFS can utilize all its functionality and "heal corrupted blocks" without problems, with HW RAID?
On Tue, 20 Jan 2009 12:13:00 PST, Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> wrote:

> So ZFS is not hindered at all if you use it in conjunction
> with HW RAID? ZFS can utilize all its functionality
> and "heal corrupted blocks" without problems, with HW RAID?

Only if you build the zpool from a mirror where each side of the mirror is a HW RAID set in itself.

    zpool create mirror (RAID5 lun1) (RAID5 lun2)

man zpool
--
  ( Kees Nuyt ) c[_]
On Tue, 20 Jan 2009, Orvar Korvar wrote:

> What does this mean? Does it mean that ZFS + HW RAID with RAID-5
> is not able to heal corrupted blocks? Then is this evidence against
> ZFS + HW RAID, and you should use only ZFS?

Yes and no. ZFS will detect corruption that other filesystems won't notice. If your HW RAID passes bad data, then ZFS will detect that, but it won't be able to correct defective user data. If ZFS manages the redundancy, then ZFS can detect and correct the bad user data.

With recent OpenSolaris there is also the option of setting copies=2 so that corrupted user data can be corrected as long as the ZFS pool itself continues functioning (example below).

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
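A minimal sketch of the copies=2 option, assuming a pool named "tank" that sits on a single hardware RAID LUN; the dataset name is a placeholder:

    # store two copies of every user data block in this dataset, so ZFS
    # has something to repair from even without a mirror or raidz
    zfs create tank/important
    zfs set copies=2 tank/important

    # a scrub walks all blocks, verifies checksums, and repairs from
    # the extra copy where it can
    zpool scrub tank
    zpool status -v tank

Note that copies=2 only applies to data written after the property is set, and it roughly doubles the space used by that dataset.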