Hi All,

I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?

So if I have 8 x 1.5TB drives, wouldn't I:

- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8

Then stripe 1,2,3,4

Then stripe 5,6,7,8

How does one do this with ZFS?

-Jason
On Fri, 26 Mar 2010, Slack-Moehrle wrote:

> I am looking at ZFS and I get that they call it RAIDZ, which is
> similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better
> for data protection?

I think so -- at the expense of extra disks for a given amount of available storage.

> So if I have 8 x 1.5TB drives, wouldn't I:
>
> - mirror drive 1 and 5
> - mirror drive 2 and 6
> - mirror drive 3 and 7
> - mirror drive 4 and 8
>
> How does one do this with ZFS?

Try this:

  zpool create dpool mirror drive1 drive5 mirror drive2 drive6 \
      mirror drive3 drive7 mirror drive4 drive8

Isn't ZFS great?!

--
Rich Teer, Publisher
Vinylphile Magazine
www.vinylphilemag.com
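For reference, once a pool like that exists, the standard status and list subcommands will confirm the layout (dpool and the driveN names are just the placeholders from the command above):

  zpool status dpool   # lists the four mirror vdevs and their member disks
  zpool list dpool     # reports total, used, and free capacity

With 8 x 1.5TB disks arranged as four 2-way mirrors, usable capacity comes out to roughly 6TB.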
And I should mention that I have a boot drive (500GB SATA), so I don't have to consider booting from the RAID; I just want to use it for storage.
Slack-Moehrle wrote:

> So if I have 8 x 1.5TB drives, wouldn't I:
>
> - mirror drive 1 and 5
> - mirror drive 2 and 6
> - mirror drive 3 and 7
> - mirror drive 4 and 8
>
> Then stripe 1,2,3,4
>
> Then stripe 5,6,7,8
>
> How does one do this with ZFS?

You don't, because your description has it backwards. You mirror each pair, then "stripe" the mirrors, not the drives within each mirror (not really a stripe in ZFS, but...):

  zpool create mypool mirror 1 5 mirror 2 6 mirror 3 7 mirror 4 8

replacing the numbers with the actual device names.
On Fri, Mar 26, 2010 at 1:39 PM, Slack-Moehrle <mailinglists at mailnewsrss.com> wrote:

> So if I have 8 x 1.5TB drives, wouldn't I:
>
> - mirror drive 1 and 5
> - mirror drive 2 and 6
> - mirror drive 3 and 7
> - mirror drive 4 and 8
>
> Then stripe 1,2,3,4
>
> Then stripe 5,6,7,8
>
> How does one do this with ZFS?

Just keep adding mirrored vdevs to the pool. It isn't exactly like a RAID-10, as ZFS doesn't do a typical RAID-0 stripe, per se, but it is the same basic concept as RAID-10: you would be striping across all of the mirrored sets, not just a subset. So you would do:

  zpool create tank mirror drive1 drive2 mirror drive3 drive4 mirror drive5 drive6 mirror drive7 drive8

See here: http://www.stringliterals.com/?p=132

--Tim
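As a rough sketch of "keep adding mirrored vdevs": growing the pool later is a single command, and ZFS starts spreading new writes across the new vdev automatically (drive9 and drive10 are hypothetical device names in the same style as above):

  zpool add tank mirror drive9 drive10

Note that adding a vdev is effectively permanent; in the ZFS versions discussed in this thread there is no way to remove a top-level vdev from a pool.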
On Fri, Mar 26, 2010 at 11:39 AM, Slack-Moehrle <mailinglists at mailnewsrss.com> wrote:

> I am looking at ZFS and I get that they call it RAIDZ, which is similar to
> RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data
> protection?
>
> So if I have 8 x 1.5TB drives, wouldn't I:
>
> - mirror drive 1 and 5
> - mirror drive 2 and 6
> - mirror drive 3 and 7
> - mirror drive 4 and 8
>
> How does one do this with ZFS?

Overly simplified, a ZFS pool is a RAID0 stripeset across all the member vdevs, which can be either mirrors (essentially RAID10), or raidz1 (essentially RAID50), or raidz2 (essentially RAID60), or raidz3 (essentially RAID70???).

A pool with a single mirror vdev is just a RAID1. A pool with a single raidz1 vdev is just a RAID5. And so on. But, as you add vdevs to a pool, it becomes a stripeset across all the vdevs.

--
Freddie Cash
fjwcash at gmail.com
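To make the analogy concrete for the eight drives in question, here are the three layouts that keep coming up in this thread, with rough usable capacities before filesystem overhead (da0-da7 are placeholder device names; substitute whatever your OS reports):

  # "RAID10"-style: four 2-way mirrors, roughly 6TB usable
  zpool create tank mirror da0 da4 mirror da1 da5 mirror da2 da6 mirror da3 da7

  # "RAID50"-style: two raidz1 vdevs of four disks each, roughly 9TB usable
  zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7

  # "RAID60"-style: one raidz2 vdev of eight disks, roughly 9TB usable
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7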
>> So if I have 8 x 1.5TB drives, wouldn't I:
>>
>> - mirror drive 1 and 5
>> - mirror drive 2 and 6
>> - mirror drive 3 and 7
>> - mirror drive 4 and 8
>>
>> Then stripe 1,2,3,4
>>
>> Then stripe 5,6,7,8
>>
>> How does one do this with ZFS?

> So you would do:
> zpool create tank mirror drive1 drive2 mirror drive3 drive4 mirror drive5 drive6 mirror drive7 drive8
>
> See here:
> http://www.stringliterals.com/?p=132

So, effectively mirroring the drives, but the pool that is created is one giant pool of all of the mirrors?

I looked at http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID-Z and they had a brief description of RAIDZ2.

Can someone explain, in terms of usable space, RAIDZ vs RAIDZ2 vs RAIDZ3 with 8 x 1.5TB?

I apologize for seeming dense; I am just confused about non-standard RAID setups, they seem tricky.

-Jason
On 26.03.2010 20:04, Slack-Moehrle wrote:

> Can someone explain, in terms of usable space, RAIDZ vs RAIDZ2 vs RAIDZ3 with 8 x 1.5TB?
>
> I apologize for seeming dense; I am just confused about non-standard RAID setups, they seem tricky.

raidz "eats" one disk. Like RAID5.
raidz2 digests another one. Like RAID6.
raidz3 yet another one. Like ... hmmmm...

//Svein
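A practical way to see that parity cost on a pool you have already created: zpool list reports raw capacity with the parity disks included, while zfs list reports space after parity is taken out, so on a raidz1/raidz2/raidz3 pool the two figures differ by roughly one, two, or three disks' worth (datastore is just an assumed pool name here):

  zpool list datastore
  zfs list datastore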
RAIDZ  = RAID5, so lose 1 drive (1.5TB)
RAIDZ2 = RAID6, so lose 2 drives (3TB)
RAIDZ3 = RAID7(?), so lose 3 drives (4.5TB)

What you lose in usable space, you gain in redundancy.

-m
>> Can someone explain, in terms of usable space, RAIDZ vs RAIDZ2 vs RAIDZ3 with 8 x 1.5TB?

> raidz "eats" one disk. Like RAID5
> raidz2 digests another one. Like RAID6
> raidz3 yet another one. Like ... hmmmm...

So:

RAIDZ  would be 8 x 1.5TB = 12TB - 1.5TB = 10.5TB
RAIDZ2 would be 8 x 1.5TB = 12TB - 3.0TB = 9.0TB
RAIDZ3 would be 8 x 1.5TB = 12TB - 4.5TB = 7.5TB

But that wouldn't really be the usable space for each, because of the mirroring?

So do you not mirror drives with RAIDZ2 or RAIDZ3, because you would have nothing left for space....?

-Jason
On Fri, 26 Mar 2010, Freddie Cash wrote:

> Overly simplified, a ZFS pool is a RAID0 stripeset across all the member
> vdevs, which can be

Except that ZFS does not support RAID0. I don't know why you guys persist with these absurd claims and continue to use wrong and misleading terminology.

What you guys are effectively doing is calling a mule a "horse" because it has four legs, two ears, and a tail, like a donkey.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:

> Except that ZFS does not support RAID0. I don't know why you guys
> persist with these absurd claims and continue to use wrong and
> misleading terminology.

What is the main difference between RAID0 and striping (which is what ZFS really does, I guess)?
On Fri, Mar 26, 2010 at 12:25:54PM -0700, Malte Schirmacher wrote:

> Bob Friesenhahn wrote:
>
> > Except that ZFS does not support RAID0. I don't know why you guys
> > persist with these absurd claims and continue to use wrong and
> > misleading terminology.
>
> What is the main difference between RAID0 and striping (which is what ZFS
> really does, I guess)?

There's a difference in implementation, but for your purposes of describing how the vdevs stripe, I'd say it's fair enough. :)

Some folks are just a little sensitive about ZFS being compared to standard RAID, is all, so watch your P's and Q's around here! ;)

Ray
On Fri, Mar 26, 2010 at 12:21 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> On Fri, 26 Mar 2010, Freddie Cash wrote:
>
> > Overly simplified, a ZFS pool is a RAID0 stripeset across all the member
> > vdevs, which can be
>
> Except that ZFS does not support RAID0.

Wow, what part of "overly simplified" did you not read, see, understand, or parse? You even quoted it.

> I don't know why you guys persist with these absurd claims and continue to
> use wrong and misleading terminology.

So, "mister I'm so much better than everyone because I know that ZFS doesn't use RAID0 but don't provide any actual useful info": how would you describe the way a ZFS pool stripes data across multiple vdevs, in such a way that someone coming from a RAID background can understand it, without using fancy-shmancy terms that no one else has ever heard? (Especially considering how confused the OP was as to how even a RAID10 array works.)

Where I come from, you start with what the person knows (RAID terminology), find ways to relate that to the new knowledge domain (basically a RAID0 stripeset), and then later build on that to explain all the fancy-shmancy terminology and nitty-gritty of how it works. We didn't all pop into the world full of all the knowledge of everything.

> What you guys are effectively doing is calling a mule a "horse" because it
> has four legs, two ears, and a tail, like a donkey.

For someone who's only ever seen, dealt with, and used horses, then (overly simplified) a mule is like a horse. Just as it is like a donkey. From there, you can go on to explain how a mule actually came to be, and what makes it different from a horse and a donkey. And what makes it better than either.

--
Freddie Cash
fjwcash at gmail.com
On Fri, March 26, 2010 14:21, Bob Friesenhahn wrote:

> On Fri, 26 Mar 2010, Freddie Cash wrote:
> >
> > Overly simplified, a ZFS pool is a RAID0 stripeset across all the member
> > vdevs, which can be
>
> Except that ZFS does not support RAID0. I don't know why you guys
> persist with these absurd claims and continue to use wrong and
> misleading terminology.

They're attempting to communicate with the OP, who made it pretty clear that he was comfortable with traditional RAID terms, and trying to understand ZFS.

> What you guys are effectively doing is calling a mule a "horse"
> because it has four legs, two ears, and a tail, like a donkey.

They're short-circuiting that discussion, and we can have it later if necessary. The differences you're emphasizing are important for implementation, and performance analysis, and even for designing the system at some levels, but they're not important to the initial understanding of the system.

The question was essentially "Wait, I don't see RAID 10 here, and that's what I like. How do I do that?" I think the answer was responsive and not misleading enough to be dangerous; the differences can be explicated later.

YMMV :-)

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
On Fri, March 26, 2010 14:25, Malte Schirmacher wrote:

> Bob Friesenhahn wrote:
>
> > Except that ZFS does not support RAID0. I don't know why you guys
> > persist with these absurd claims and continue to use wrong and
> > misleading terminology.
>
> What is the main difference between RAID0 and striping (which is what ZFS
> really does, I guess)?

RAID creates fixed, absolute patterns of spreading blocks, bytes, and bits around the various disks; ZFS does not, it makes on-the-fly decisions about where things should go at some levels. In RAID1, a block will go to the same physical place on each drive; in a ZFS mirror it won't, it'll just go *somewhere* on each drive.

In the end, RAID produces a block device that you then run a filesystem on, whereas ZFS includes the filesystem (and other things, including block devices you can run other filesystems on).

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
It depends a bit on how you set up the drives, really. You could make one raidz vdev of 8 drives, losing one of them for parity, or you could make two raidz vdevs of 4 drives each and lose two drives for parity (one for each vdev). You could also do one raidz2 vdev of 8 drives and lose two drives for parity, or two raidz2 vdevs of 4 drives each and lose four drives for parity (2 for each raidz2 vdev). That would give you a bit better redundancy than using 4 mirrors while giving you the same available storage space. The list goes on and on. There are a lot of different configurations you could use with 8 drives, but keep in mind once you add a vdev to your pool, you can't remove it. Personally, I would not choose to create one vdev of 8 disks, but that's just me.

It is important to be aware that when and if you want to replace the 1.5TB disks with something bigger, you need to replace ALL the disks in the vdev to gain the extra space. So, if you wanted to go from 1.5TB to 2TB disks down the road, and you set up one raidz of 8 drives, you need to replace all 8 drives before you gain the additional space. If you do two raidz vdevs of 4 drives each, you need to replace 4 drives to gain additional space. If you use mirrors, you need to replace 2 drives. Or, you can add a new vdev of 2, 4, 8, or however many disks you want if you have the physical space to do so. I believe you can mix and match mirror vdevs and raidz vdevs within a zpool, but I don't think it's recommended to do so. The ZFS best practices guide has a lot of good information in it if you have not read it yet (google).

You might have less usable drive space using mirrors, but you will gain a bit of performance, and it's a bit easier to expand your zpool when the time comes. A raidz (1, 2, 3) can give you more usable space, and can give you better or worse redundancy depending on how you set it up. There is a lot to consider. I hope I didn't cloud things up for you any further or misinform you on something (I'm a newb too, so don't take my word alone on anything).

Hell, if you wanted to, you could also do one 8-way mirror that would give you an ignorant amount of redundancy at the cost of 7 drives' worth of usable space. It all boils down to personal choice. You have to determine how much usable space, redundancy, performance, and ease of replacing drives mean to you and go from there. ZFS will do pretty much any configuration to suit your needs.

eric
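As a concrete sketch of the replace-to-grow path described above (the device names are placeholders; depending on ZFS version you may also need the autoexpand pool property, or an export/import, before the extra space shows up):

  zpool replace tank da0 da8   # da8 is the new, larger disk
  zpool status tank            # wait for the resilver onto da8 to finish
  zpool replace tank da4 da9   # then the other disk of that same mirror vdev

Once every disk in a given vdev has been replaced with a larger one, that vdev (and so the pool) grows; the other vdevs are unaffected.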
On Fri, 26 Mar 2010, Malte Schirmacher wrote:

> What is the main difference between RAID0 and striping (which is what ZFS
> really does, I guess)?

ZFS only stripes within raidzN vdevs, and even then at the ZFS record level and not using a "RAID0" (fixed mapping on the LUN) approach.

"RAID0" and "striping" are similar concepts. When one stripes across an array of disks, one breaks up the written block (record) and writes parts of it across all of the disks in the stripe. This is usually done to increase sequential read/write performance but may also be used to assist with error recovery (which ZFS does take advantage of).

ZFS only writes whole records (e.g. 128K) to a vdev, so it does not "stripe" across vdevs. Within a vdev, it may stripe. The difference is pretty huge when one considers that ZFS is able to support vdevs of different sizes and topologies, as well as ones added much more recently than when the pool was created. RAID0 and striping can't do that.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Fri, 26 Mar 2010, David Dyer-Bennet wrote:

> The question was essentially "Wait, I don't see RAID 10 here, and that's
> what I like. How do I do that?" I think the answer was responsive and
> not misleading enough to be dangerous; the differences can be explicated
> later.

Most of us choose a pool design and then copy all of our data to it. If one does not understand how the pool works, then a poor design may be selected, which can be difficult to extricate from later. That is why it is important to know that ZFS writes full records to each vdev and does not "stripe" the blocks across vdevs as was suggested.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Fri, 26 Mar 2010, Freddie Cash wrote:

> On Fri, Mar 26, 2010 at 12:21 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> > On Fri, 26 Mar 2010, Freddie Cash wrote:
> > > Overly simplified, a ZFS pool is a RAID0 stripeset across all the member
> > > vdevs, which can be
> >
> > Except that ZFS does not support RAID0.
>
> Wow, what part of "overly simplified" did you not read, see, understand, or parse? You even quoted it.

Sorry to pick on your email in particular. Everyone here should consider it to be their personal duty to correct such statements. The distinctions may not seem important, but they are important to understand since they can be quite important to pool performance.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
OK, so I made progress today. FreeBSD sees all of my drives, and ZFS is acting correctly.

Now for my confusion.

RAIDZ3:

# zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7

gives: 'raidz3': no such GEOM provider

I am looking at the best practices guide and I am confused about adding a hot spare. Won't that happen with the above command, or do I really just zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 and then issue the hot-spare command twice for da6 and da7?

-Jason
On Fri, Mar 26, 2010 at 6:29 PM, Slack-Moehrle <mailinglists at mailnewsrss.com> wrote:

> RAIDZ3:
>
> # zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7
>
> gives: 'raidz3': no such GEOM provider

Triple parity did not get added until version 17. FreeBSD cannot do raidz3.

--Tim
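An easy way to check what a given ZFS build supports is to list the pool versions it knows about; on builds that have triple parity, raidz3 appears under version 17:

  zpool upgrade -v

The same command is also handy before moving a pool between systems, since a pool created at a newer version cannot be imported by an older implementation.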
On Mar 26, 2010, at 23:37, David Dyer-Bennet <dd-b at dd-b.net> wrote:

> RAID creates fixed, absolute patterns of spreading blocks, bytes, and
> bits around the various disks; ZFS does not, it makes on-the-fly decisions
> about where things should go at some levels. In RAID1, a block will go
> to the same physical place on each drive; in a ZFS mirror it won't, it'll
> just go *somewhere* on each drive.

This is not correct. In a ZFS mirror, a block will go to the same offset within the data area on both submirrors. But if you set up your mirrored slices starting at different offsets, you can arrange for blocks on submirrors to have different physical offsets ;-)
On Fri, Mar 26, 2010 at 4:29 PM, Slack-Moehrle <mailinglists at mailnewsrss.com> wrote:

> OK, so I made progress today. FreeBSD sees all of my drives, and ZFS is acting correctly.
>
> # zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7
>
> gives: 'raidz3': no such GEOM provider

FreeBSD 7.3 includes ZFSv13. FreeBSD 8.0 includes ZFSv13. FreeBSD 8-STABLE currently includes ZFSv14, with work on-going to get ZFSv15 in. FreeBSD 8.1 (released this summer) will, hopefully, include ZFSv15.

raidz3 support is not available in any of the above versions of ZFS. Thus, the error message. You are limited to mirror, raidz1, and raidz2 vdevs in FreeBSD (for data storage; there are also the log, cache, and spare vdev types available). Hopefully, ZFSv20-something will be included when FreeBSD 9.0 is released.

> I am looking at the best practices guide and I am confused about adding a
> hot spare. Won't that happen with the above command, or do I really just zpool
> create datastore raidz3 da0 da1 da2 da3 da4 da5 and then issue the hot-spare
> command twice for da6 and da7?

All in one command:

  zpool create datastore raidz2 da0 da1 da2 da3 da4 da5 da6 spare da7

Or, as two separate commands:

  zpool create datastore raidz2 da0 da1 da2 da3 da4 da5 da6
  zpool add datastore spare da7

One thing you may want to do is label your disks using glabel(8). That way, if you re-arrange the drives, or swap controllers, or boot with a missing drive, or add new drives, everything will continue to work correctly. While ZFS does its own labelling of the drives, I've found it to be quite fragile, in the sense that it requires a "zpool export" and "zpool import" process, usually with a -f on the import. (At least on FreeBSD.) In comparison, using glabel eliminates all those issues, and happens below the ZFS layer, presenting ZFS an always-consistent view of the hardware.

  glabel label disk01 da0
  glabel label disk02 da1
  glabel label disk03 da2
  glabel label disk04 da3
  glabel label disk05 da4
  glabel label disk06 da5
  glabel label disk07 da6
  glabel label disk08 da7

  zpool create datastore raidz2 label/disk01 label/disk02 label/disk03 label/disk04 label/disk05 label/disk06 label/disk07
  zpool add datastore spare label/disk08

Thus, no matter what the underlying device node is (da0 could become ada6 tomorrow if you switch to an AHCI controller, for example), the kernel will map the drives correctly, and ZFS only has to worry about using "label/disk01".

--
Freddie Cash
fjwcash at gmail.com
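After either approach, a quick look at the pool (same datastore name as above) should show the raidz2 vdev with its seven labelled disks, plus the eighth disk listed in a separate "spares" section:

  zpool status datastore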