On Wed, Nov 24, 2004 at 10:54:07AM +0100, Eirik Øverby wrote:
> to the best of my ability I have been investigating the 'real'
> requirements of a raid-3 array, and cannot see how the following text
> from graid3(8) can possibly be correct - and if it is, then the
> implementation must be wrong or incomplete (emphasis added):
>
>      label     Create a RAID3 device.  The last given component will
>                contain parity data, all the rest - regular data.
>                ***Number of components has to be equal to 3, 5, 9, 17,
>                etc. (2^n + 1).***
>
> I might be wrong, but I cannot see how a raid-3 array should require
> (2^n + 1) drives - I am fairly certain I have seen raid-3 arrays
> consisting of four drives, for example. This is also what I had hoped
> to accomplish.

This requirement exists because we want the sector size to be a power
of 2 (UFS needs it). In RAID3 we want to send every I/O request to all
components at once, which is why the sector size must be N*512, where
N is a power of 2 - and because graid3 uses one parity component, we
need N+1 providers.

-- 
Pawel Jakub Dawidek                       http://www.FreeBSD.org
pjd@FreeBSD.org                           http://garage.freebsd.pl
FreeBSD committer                         Am I Evil? Yes, I Am!
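To make the arithmetic above concrete, here is a minimal sketch in C of
the component-count rule Pawel describes. It is illustrative only, not
the actual graid3 source; the function name raid3_ncomponents_ok is
invented for this example. With ncomponents providers, one holds parity,
so N = ncomponents - 1 data disks each contribute 512 bytes to every
logical sector, and N*512 is a power of 2 only when N itself is - hence
3, 5, 9, 17, ... (2^n + 1) components.

    #include <stdio.h>

    /*
     * Sketch of the graid3 component-count rule (illustrative, not the
     * real implementation): ncomponents - 1 data disks must be a power
     * of 2 so that the logical sector size (N * 512) is a power of 2.
     */
    static int
    raid3_ncomponents_ok(int ncomponents)
    {
            int n = ncomponents - 1;        /* data components */

            /* n must be a power of 2, and at least 2. */
            return (n >= 2 && (n & (n - 1)) == 0);
    }

    int
    main(void)
    {
            int i;

            for (i = 3; i <= 17; i++)
                    printf("%2d components: %s (sector size %d)\n", i,
                        raid3_ncomponents_ok(i) ? "ok" : "invalid",
                        (i - 1) * 512);
            return (0);
    }

Running this shows that 3, 5, 9, and 17 components are accepted (with
sector sizes 1024, 2048, 4096, and 8192), while the four-drive array
asked about above is rejected, since 3 data disks would give a 1536-byte
sector, which is not a power of 2.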
Hi,

to the best of my ability I have been investigating the 'real'
requirements of a raid-3 array, and cannot see how the following text
from graid3(8) can possibly be correct - and if it is, then the
implementation must be wrong or incomplete (emphasis added):

     label     Create a RAID3 device.  The last given component will
               contain parity data, all the rest - regular data.
               ***Number of components has to be equal to 3, 5, 9, 17,
               etc. (2^n + 1).***

I might be wrong, but I cannot see how a raid-3 array should require
(2^n + 1) drives - I am fairly certain I have seen raid-3 arrays
consisting of four drives, for example. This is also what I had hoped
to accomplish.

Anyone care to shed some light on this? I'd prefer to use graid3 (or 5,
if there was one) instead of gvinum..

Thanks,
/Eirik
> What's unusable about it? I've 4 250GB ATA drives, desiring capacity +
> redundancy, but don't care about speed, much like you, and gvinum raid 5
> has suited me just fine this past few weeks. Eats a lot of system cpu
> when there is heavy IO to the R5, but I've booted up with a drive
> unplugged and it worked fine in degraded mode, so I'm content...

Hmm... Maybe I got lucky/had an empty filesystem/hallucinated last time,
but in any event when I try pulling the power on a drive now I get an
error about a block not being found... Less than reassuring:

drive 1:
/dev/gvinum/big: CAN'T CHECK FILE SYSTEM
/dev/gvinum/big: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY

drive 2:
/dev/gvinum/big: CANNOT READ BLK: 1401158656
/dev/gvinum/big: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY

drive 3:
/dev/gvinum/big: CANNOT READ BLK: 1401158656
/dev/gvinum/big: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY

drive 4:
Cannot find file system superblock
/dev/gvinum/big: CANNOT READ BLK: 1401158656
/dev/gvinum/big: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY
/dev/gvinum/big: CANNOT WRITE BLK: 12000
/dev/gvinum/big: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY

---
THE FOLLOWING FILE SYSTEM HAD AN UNEXPECTED INCONSISTENCY:
ufs: /dev/gvinum/big (/home)

Automatic file system check failed; help!

Pulling power on the RAID0/RAID1 arrays I have does what I expect it to
do... Anyone have any idea what's going on here?

Cheers,
Brian Szymanski
ski@indymedia.org
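For background on the degraded-mode behaviour mentioned above: a parity
RAID array can keep serving reads with one drive gone because the missing
block is the XOR of the surviving blocks in the same stripe. The toy C
sketch below illustrates that idea only; it assumes nothing about
gvinum's internals, and the names (BLKSZ, NDISKS) and the stripe layout
are invented for the example.

    #include <stdio.h>

    /*
     * Toy XOR-parity reconstruction (illustrative, not gvinum code):
     * parity = XOR of all data blocks in a stripe, so any one missing
     * block can be rebuilt as the XOR of the survivors.
     */
    #define BLKSZ   8
    #define NDISKS  4       /* 3 data components + 1 parity */

    int
    main(void)
    {
            unsigned char stripe[NDISKS][BLKSZ] = {
                    "data-0", "data-1", "data-2", ""
            };
            unsigned char rebuilt[BLKSZ];
            int i;

            /* Compute the parity block over the data blocks. */
            for (i = 0; i < BLKSZ; i++)
                    stripe[3][i] = stripe[0][i] ^ stripe[1][i] ^
                        stripe[2][i];

            /* "Lose" disk 1, then rebuild it from the survivors. */
            for (i = 0; i < BLKSZ; i++)
                    rebuilt[i] = stripe[0][i] ^ stripe[2][i] ^
                        stripe[3][i];

            printf("rebuilt block: %s\n", (char *)rebuilt);
            return (0);
    }

This prints "rebuilt block: data-1", showing the lost block recovered
without the failed disk - which is why the fsck errors above are
surprising for a single-drive failure on a RAID 5 volume.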