Hi there, I have a storage array with 12 SCSI disks configured as a single raidz pool. I tried pulling out 2 SCSI disks while data was being written to the raidz pool. A couple of minutes after I pulled the disks, the raidz pool crashed! I can't find any docs about raidz fault tolerance. Does anyone know? Thanks. This message posted from opensolaris.org
On 5/26/06, axa <axanett at yahoo.com.tw> wrote:
> Hi there,
>
> I have a storage array with 12 SCSI disks configured as a single raidz pool.
>
> I tried pulling out 2 SCSI disks while data was being written to the raidz
> pool. A couple of minutes after I pulled the disks, the raidz pool crashed!
> I can't find any docs about raidz fault tolerance. Does anyone know?

raidz is like RAID-5, so you can survive the death of one disk, not two.
I would recommend you configure the 12 disks into 2 raidz groups; then you
can survive the death of one drive from each group. This is what I did on
my system:

-bash-3.00$ /usr/sbin/zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          raidz        ONLINE       0     0     0
            c1t11d0s0  ONLINE       0     0     0
            c1t12d0s0  ONLINE       0     0     0
            c1t13d0s0  ONLINE       0     0     0
            c1t14d0s0  ONLINE       0     0     0
          raidz        ONLINE       0     0     0
            c1t3d0s0   ONLINE       0     0     0
            c1t4d0s0   ONLINE       0     0     0
            c1t8d0s0   ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0

errors: No known data errors

James Dickens
uadmin.blogspot.com
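P.S. If you do rebuild the pool that way, the create command would look roughly like the sketch below. This is untested and the cXtYd0 device names are made up, so substitute whatever format(1M) reports on your box.

    # destroy the old single-group pool first (this erases it, so back up anything you need)
    zpool destroy data

    # recreate it as two 6-disk raidz groups
    zpool create data \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0

Each of the two 6-disk raidz groups can then lose one disk without taking the pool down; "zpool status data" afterwards should show both groups ONLINE.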
RAID-Z is single-fault tolerant. If you take out two disks, then you no longer have the required redundancy to maintain your data. Build 42 should contain double-parity RAID-Z, which will allow you to sustain two simultaneous disk failures without data loss.

For an overview of ZFS fault tolerance, you should check out the ZFS administration guide:

http://docs.sun.com/app/docs/doc/817-2271

For a description of how RAID-Z works internally, you should look at Jeff's blog:

http://blogs.sun.com/roller/page/bonwick?entry=raid_z

- Eric

On Fri, May 26, 2006 at 10:08:45AM -0700, axa wrote:
> Hi there,
>
> I have a storage array with 12 SCSI disks configured as a single raidz pool.
>
> I tried pulling out 2 SCSI disks while data was being written to the raidz
> pool. A couple of minutes after I pulled the disks, the raidz pool crashed!
> I can't find any docs about raidz fault tolerance. Does anyone know?
>
> Thanks.

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
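P.S. To be clear, losing (or pulling) a single disk from a single-parity RAID-Z group is survivable: the pool keeps running degraded, and you replace the disk and let it resilver. A rough sketch of that, with hypothetical device names:

    # the pool should report DEGRADED, with the missing disk unavailable
    zpool status data

    # put a new disk in the same slot and tell ZFS to rebuild onto it
    zpool replace data c1t5d0

    # watch the resilver progress until the pool is ONLINE again
    zpool status -v data

It's only when a second disk disappears while the first is still gone that the group has no redundancy left, which is what happened here.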
> raidz is like RAID-5, so you can survive the death of one disk, not two.
> I would recommend you configure the 12 disks into 2 raidz groups; then you
> can survive the death of one drive from each group. This is what I did on
> my system

Hi James, thank you very much. ;-)

I'll configure 2 raidz groups in my pool tomorrow. BTW, I'm not sure whether multiple raidz groups might sacrifice performance?

Thanks.

This message posted from opensolaris.org
Richard Elling
2006-May-26 18:09 UTC
[zfs-discuss] Re: How's zfs RAIDZ fault-tolerant ???
On Fri, 2006-05-26 at 10:51 -0700, axa wrote:
> > raidz is like RAID-5, so you can survive the death of one disk, not two.
> > I would recommend you configure the 12 disks into 2 raidz groups; then you
> > can survive the death of one drive from each group. This is what I did on
> > my system
>
> Hi James, thank you very much. ;-)
>
> I'll configure 2 raidz groups in my pool tomorrow. BTW, I'm not sure whether
> multiple raidz groups might sacrifice performance?

The prevailing wind says that RAID-Z (or even RAID-5) using lots of disks does not perform as well as a smaller number of disks (eg. 5-7). The effect depends entirely on the workload, so it is difficult to claim all cases are better. I'll predict that it is likely you will get better performance using 2x6-disk RAID-Z than 1x12-disk RAID-Z.

s/RAID-Z/RAID-5/g
s/RAID-5/RAID-6/g

-- richard
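P.S. The capacity cost of splitting is small: a single 12-disk RAID-Z group gives you roughly 11 disks' worth of usable space (one disk of parity), while 2x6-disk groups give you roughly 10 (one parity disk per group), so you give up about one disk's worth, around 8% of the raw capacity, for the extra redundancy and, most likely, better performance.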
> RAID-Z is single-fault tolerant. If you take out two disks, then you
> no longer have the required redundancy to maintain your data. Build 42
> should contain double-parity RAID-Z, which will allow you to sustain two
> simultaneous disk failures without data loss.

I'm not sure if this has been mentioned elsewhere (I didn't see it), but will this double parity be backported into Solaris 10 in time to make the U2 release? This is a sorely needed piece of functionality for my deployment (and I'm sure many others').

Thanks,
David
It will be backported to an S10 update, but it won't make U2. Expect it in U3.

- Eric

On Fri, May 26, 2006 at 09:32:42AM -1000, David J. Orman wrote:
> > RAID-Z is single-fault tolerant. If you take out two disks, then you
> > no longer have the required redundancy to maintain your data. Build 42
> > should contain double-parity RAID-Z, which will allow you to sustain two
> > simultaneous disk failures without data loss.
>
> I'm not sure if this has been mentioned elsewhere (I didn't see it), but will
> this double parity be backported into Solaris 10 in time to make the U2
> release? This is a sorely needed piece of functionality for my deployment
> (and I'm sure many others').
>
> Thanks,
> David

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On Fri, May 26, 2006 at 10:33:34AM -0700, Eric Schrock wrote:
> RAID-Z is single-fault tolerant. If you take out two disks, then you
> no longer have the required redundancy to maintain your data. Build 42
> should contain double-parity RAID-Z, which will allow you to sustain two
> simultaneous disk failures without data loss.

Eric,

is raidz double parity optional or mandatory?

grant.
On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote:
> is raidz double parity optional or mandatory?

Backwards compatibility dictates that it will be optional.
On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote:
> On Fri, May 26, 2006 at 10:33:34AM -0700, Eric Schrock wrote:
>
> > RAID-Z is single-fault tolerant. If you take out two disks, then you
> > no longer have the required redundancy to maintain your data. Build 42
> > should contain double-parity RAID-Z, which will allow you to sustain two
> > simultaneous disk failures without data loss.
>
> Eric,
>
> is raidz double parity optional or mandatory?

Optional. There will be a new vdev type, 'raidz2', which will be double parity. Existing raid-z vdevs will become 'raidz1', and 'raidz' will be kept as an alias for 'raidz1' for backwards compatibility.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
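P.S. Once that putback lands, creating a double-parity pool should look something like this (a sketch only, with illustrative device names):

    # a 5-wide double-parity group: any two of the five disks can fail
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

The tradeoff is that two disks' worth of capacity go to parity instead of one.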
Eric Schrock wrote:
> On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote:
>> Eric,
>>
>> is raidz double parity optional or mandatory?
>
> Optional. There will be a new vdev type, 'raidz2', which will be
> double parity. Existing raid-z vdevs will become 'raidz1', and 'raidz'
> will be kept as an alias for 'raidz1' for backwards compatibility.

This implies that the differences on disk are sufficient that the zpool upgrade stuff wouldn't cut it, right? That is, there is no way to turn an existing raidz1 into a raidz2.

Is there a new on-disk version for this, or is that not needed since it is at the vdev layer? Just curious (mainly because I'm currently using version "3" for zfs-crypto :-)).

I assume the on-disk format document will be updated too.

--
Darren J Moffat
On Tue, May 30, 2006 at 11:18:33AM +0100, Darren J Moffat wrote:
>
> This implies that the differences on disk are sufficient that the zpool
> upgrade stuff wouldn't cut it, right? That is, there is no way to turn
> an existing raidz1 into a raidz2.

Yes, that is correct.

> Is there a new on-disk version for this, or is that not needed since it
> is at the vdev layer?

It is a new version, since 'nparity' is stored as part of the label, and we must prevent old versions from trying to naively open it as a normal "raidz" vdev.

> Just curious (mainly because I'm currently using version "3" for
> zfs-crypto :-)).

This week we'll putback version 3, which will include double-parity RAID-Z, hot spares, and improved RAID-Z accounting.

> I assume the on-disk format document will be updated too.

Yes.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
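P.S. For what it's worth, once version 3 is out the usual pool upgrade dance applies, roughly:

    zpool upgrade -v        # list the on-disk versions this software understands
    zpool upgrade           # show any pools still running an older version
    zpool upgrade data      # upgrade the 'data' pool to the newest supported version

As noted above, that only bumps the pool's on-disk version; it won't convert an existing raidz1 vdev into a raidz2.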