Question #1:

I've seen 5-6 disk zpools are the most recommended setup.

In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks usable) out of the 15 disks (like raidz2 I suppose). What would make the most sense to set up 15 disks with ~13 disks of usable space? This is for a home fileserver; I do not need HA/hotplugging/etc., so I can tolerate a failure and replace it with plenty of time. It's not mission critical.

Question #2:

Same question, but 10 disks, and I'd sacrifice one for parity then. Not two. So ~9 disks usable roughly (like raidz).

Thanks.
On Fri, Aug 22, 2008 at 00:15, mike <opensolaris at mike2k.com> wrote:
> Question #1:
>
> I've seen 5-6 disk zpools are the most recommended setup.
>
> In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks usable) out of the 15 disks (like raidz2 I suppose). What would make the most sense to set up 15 disks with ~13 disks of usable space? This is for a home fileserver, I do not need HA/hotplugging/etc. so I can tolerate a failure and replace it with plenty of time. It's not mission critical.

I'd do two raidz groups of seven disks each and a hot spare; it gives you 12 disks worth of capacity and reasonable redundancy. Digging up room for another disk and doing 8-disk raidz2 groups would be better for professional use, but for home usage raidz will probably suffice. Make backups of the important things, on different storage.

> Question #2:
>
> Same question, but 10 disks, and I'd sacrifice one for parity then. Not two. So ~9 disks usable roughly (like raidz)

Well, if you'll only give up one disk, you have only one option: a single raidz group. Groups that wide aren't recommended, so you might consider buying a smaller number of larger-capacity disks (1.5TB disks are about to be out) and making a narrower group. Fewer disks mean fewer opportunities for complete failure.

Will
i could probably do 16 disks and maybe do a raidz on both for 14 disks usable combined... that's probably as redundant as i'd need, i think.

can you combine two zpools together? or will i have two separate "partitions" (i.e. i'll have "tank" for example and "tank2" instead of making one single large "tank")?
>>>>> "m" == mike <opensolaris at mike2k.com> writes:m> can you combine two zpools together? no. You can have many vdevs in one pool. for example you can have a mirror vdev and a raidz2 vdev in the same pool. You can also destroy pool B, and add its (now empty) devices to pool A. but once two separate pools are created you can''t later smush them together. but...since you bring it up, that is exactly what I would do with the 16 disks: make two pools. I''d make one of the pools compressed, make backups onto it with zfs send/recv, and leave it exported most of the time. Every week or so I''d spin up the disks, import the pool, write another incremental backup onto it, scrub it, export it, and spin the disks back down. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20080822/419f7023/attachment.bin>
see, originally when i read about zfs it said it could expand to petabytes or something. but really, that's not as a single "filesystem"? that could only be accomplished through combinations of pools?

i don't really want to have to even think about managing two separate "partitions" - i'd like to group everything together into one large 13tb instance (or however big it winds up being) - is that not possible?
> see, originally when i read about zfs it said it could expand to petabytes or something. but really, that's not as a single "filesystem"? that could only be accomplished through combinations of pools?
>
> i don't really want to have to even think about managing two separate "partitions" - i'd like to group everything together into one large 13tb instance (or however big it winds up being) - is that not possible?

You could try

zpool create my_disk \
  raidz disk1 disk2 disk3 disk4 disk5 \
  raidz disk6 disk7 disk8 disk9 disk10 \
  raidz disk11 disk12 disk13 disk14 disk15 \
  spare disk16

This will give you 12 (metric) TB which is almost 11 TB as seen by the computer.

--
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare
On Fri, Aug 22, 2008 at 8:11 AM, mike <opensolaris at mike2k.com> wrote:
> see, originally when i read about zfs it said it could expand to petabytes or something. but really, that's not as a single "filesystem"? that could only be accomplished through combinations of pools?
>
> i don't really want to have to even think about managing two separate "partitions" - i'd like to group everything together into one large 13tb instance (or however big it winds up being) - is that not possible?

Sure it's possible. That's how it works. Say with 16 disks:

zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 \
  raidz1 disk6 disk7 disk8 disk9 disk10 \
  raidz1 disk11 disk12 disk13 disk14 disk15 \
  spare disk16

Gives you a single pool containing 3 raidz vdevs (each 4 data + 1 parity) and a hot spare.

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Hello mike,

Friday, August 22, 2008, 8:11:36 AM, you wrote:

m> see, originally when i read about zfs it said it could expand to
m> petabytes or something. but really, that's not as a single
m> "filesystem"? that could only be accomplished through combinations of pools?

m> i don't really want to have to even think about managing two
m> separate "partitions" - i'd like to group everything together into
m> one large 13tb instance (or however big it winds up being) - is that not possible?

you can do something like:

zpool create test raidz2 d1 d2 d3 d4 d5 raidz2 d6 d7 d8 d9 d10 \
  raidz2 d11 d12 d13 d14 d15

zfs create test/fs1
zfs create test/fs2
zfs create test/fs3

That way you have created a pool which is made of 3 raid-z2 groups, and you have then created an additional 3 file systems within the pool; each of them can use all the space in the pool by default.

Later on you can do:

zpool add test raidz2 d16 d17 d18 d19 d20

--
Best regards,
Robert Milkowski
mailto:milek at task.gda.pl
http://milek.blogspot.com
likewise i could also do something like

zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
  raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15

and i'd have a 7 disk raidz1 and an 8 disk raidz1... and i'd have 15 disks still broken up into not-too-horrible pool sizes and a single filesystem to use, if i understood everything.

either pool could suffer one physical failure. i run "fmadm faulty" every 10 minutes to notify me instantly of any detected failures/events (thanks Richard for the pointers :))
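[For reference, a small script along these lines could do that periodic check. The script path, the mail address, and the assumption that "fmadm faulty" prints nothing when there are no faults are illustrative, not from the thread:

#!/bin/sh
# check_faults.sh - mail the output of 'fmadm faulty' only when it reports something
OUT=`/usr/sbin/fmadm faulty 2>&1`
if [ -n "$OUT" ]; then
    echo "$OUT" | mailx -s "fmadm faulty on `hostname`" admin@example.com
fi

with a crontab entry such as:

0,10,20,30,40,50 * * * * /usr/local/bin/check_faults.sh
]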
mike wrote:
> likewise i could also do something like
>
> zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
>   raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
>
> and i'd have a 7 disk raidz1 and an 8 disk raidz1... and i'd have 15 disks still broken up into not-too-horrible pool sizes and a single filesystem to use, if i understood everything.
>
> either pool could suffer one physical failure. i run "fmadm faulty" every 10 minutes to notify me instantly of any detected failures/events (thanks Richard for the pointers :))

^^^^ vdev, not pool. There is only one pool, and thus one filesystem namespace, in that configuration.

--
Darren J Moffat
Hey Mike,

First of all, I'd strongly suggest going for raidz2 instead of raidz. Dual parity protection is something I'd strongly recommend over single parity protection.

You also don't mention your boot pool. You can't boot from a raid pool, so you need to put one disk aside for booting from. You may have included that already though, so I'll assume you've got 15 data disks.

If you're not worried about performance you can probably create a single 15 disk raidz2 pool. In terms of data protection that's better than a 14 disk raidz pool with a hot spare, and gives you plenty of capacity. As far as I'm aware, there's nothing stopping you creating a raidz or raidz2 stripe of 15 disks; it's just that you get better performance with single digit stripe sizes.

Failing that, I'd suggest going for two raid-z2 stripes of 7 disks each, with one drive left over as a hot spare.

Ross
14+2 or 7+1

On 8/22/08, Miles Nordin <carton at ivy.net> wrote:
>>>>>> "m" == mike <opensolaris at mike2k.com> writes:
>
> m> can you combine two zpools together?
>
> no. You can have many vdevs in one pool. For example you can have a
> mirror vdev and a raidz2 vdev in the same pool. You can also destroy
> pool B, and add its (now empty) devices to pool A. But once two
> separate pools are created you can't later smush them together.
>
> but...since you bring it up, that is exactly what I would do with the
> 16 disks: make two pools. I'd make one of the pools compressed, make
> backups onto it with zfs send/recv, and leave it exported most of the
> time. Every week or so I'd spin up the disks, import the pool, write
> another incremental backup onto it, scrub it, export it, and spin the
> disks back down.
I hear everyone's concerns about multiple parity disks. Are there any benchmarks or numbers showing the performance difference using a 15 disk raidz2 zpool? I am fine sacrificing some performance but obviously don't want to make the machine crawl.

It sounds like I could go with 15 disks evenly and have to sacrifice 3, but I would have 1 parity disk on each 7 disk raidz1 zpool and a hot spare to cover a failure on either pool:

zpool create tank \
  raidz disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
  raidz disk8 disk9 disk10 disk11 disk12 disk13 disk14 \
  spare disk15

That's pretty much dual parity/dual failure for both pools assuming I swap out the dead drive pretty quickly. Yeah?

And terminology-wise, one or more zpools create zdevs right?

Oh, and does raidz2 provide more performance than a raidz1 as it is kind of like dual parity and can split up the parity traffic over two devices?

Thanks :)
> Are there any benchmarks or numbers showing the performance difference using a 15 disk raidz2 zpool? I am fine sacrificing some performance but obviously don't want to make the machine crawl.
>
> It sounds like I could go with 15 disks evenly and have to sacrifice 3, but I would have 1 parity disk on each 7 disk raidz1 zpool and a hot spare to cover a failure on either pool:
>
> zpool create tank \
>   raidz disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
>   raidz disk8 disk9 disk10 disk11 disk12 disk13 disk14 \
>   spare disk15

If space is not your first priority I'd go for a zpool with two raidz2 sets, each with 8 disks. Then you still have 12 disks of usable space at your disposal.

--
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare
On Thu, 2008-08-21 at 21:15 -0700, mike wrote:
> I've seen 5-6 disk zpools are the most recommended setup.

This is incorrect. Much larger zpools built out of striped redundant vdevs (mirror, raidz1, raidz2) are recommended and also work well. raidz1 or raidz2 vdevs of more than a single-digit number of drives are not recommended.

So, for instance, the following is an appropriate use of 12 drives in two raidz2 sets of 6 disks, with 8 disks worth of raw space available:

zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5
zpool add mypool raidz2 disk6 disk7 disk8 disk9 disk10 disk11

> In traditional RAID terms, I would like to do RAID5 + hot spare (13
> disks usable) out of the 15 disks (like raidz2 I suppose). What would
> make the most sense to setup 15 disks with ~ 13 disks of usable space?

Enable compression, and set up multiple raidz2 groups. Depending on what you're storing, you may get back more than you lose to parity.

> This is for a home fileserver, I do not need HA/hotplugging/etc. so I
> can tolerate a failure and replace it with plenty of time. It's not
> mission critical.

That's a lot of spindles for a home fileserver. I'd be inclined to go with a smaller number of larger disks in mirror pairs, allowing me to buy larger disks in pairs as they come on the market to increase capacity.

> Same question, but 10 disks, and I'd sacrifice one for parity then.
> Not two. so ~9 disks usable roughly (like raidz)

zpool create mypool raidz1 disk0 disk1 disk2 disk3 disk4
zpool add mypool raidz1 disk5 disk6 disk7 disk8 disk9

8 disks raw capacity, can survive the loss of any one disk or the loss of two disks in different raidz groups.
Oh sorry - for boot I don't care if it's redundant or anything. Worst case the drive fails, I replace it and reinstall, and just re-mount the ZFS stuff.

If I have the space in the case and the ports I could get a pair of 80 gig drives or something and mirror them using SVM (which was recommended to me by someone) or use ZFS for it, but I was a bit nervous since ZFS boot is still so new. But that'd be just a mirror (raid1 style), nothing fancy there, just good enough for booting and liveupgrade.

I was told to partition this way to make liveupgrade easy:

/
/lu = identical space as /
swap
mike wrote:
> And terminology-wise, one or more zpools create zdevs right?

No, that isn't correct.

One or more vdevs create a pool. Each vdev in a pool can be a different type, eg a mix of mirror, raidz, raidz2.

There is no such thing as a zdev.

--
Darren J Moffat
> No, that isn't correct.
>
> One or more vdevs create a pool. Each vdev in a pool can be a
> different type, eg a mix of mirror, raidz, raidz2.
>
> There is no such thing as a zdev.

Sorry :)

Okay, so you can create a zpool from multiple vdevs. But you cannot add more vdevs to a zpool once the zpool is created. Is that right? That's what it sounded like someone said earlier.
mike wrote:
> And terminology-wise, one or more zpools create zdevs right?

Let's get the terminology right first.

You can have more than one zPool.

Each zPool can have many filesystems which all share *ALL* the space in the pool.

Each zPool gets its space from one or more vDevs. (Yes, you can put more than one vDev in a single pool, and space from all vDevs is available to all filesystems - no artificial boundaries here.)

Each vDev can be one of several types:

  Single - 1 device  - No redundancy. 100% of the space usable.
  Mirror - 2 devices min - Redundancy increases as you add mirror devices. Available space is equal to the smallest device.
  RAIDZ1 - 3 devices min - Redundancy allows 1 failure at a time. Available space is (n-1) times the smallest device.
  RAIDZ2 - 4 devices min - Redundancy allows 2 failures at once. Available space is (n-2) times the smallest device.

You can (though I don't know why you'd want to) put vDevs of different types in the same pool.

> zpool create tank \
>   raidz disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
>   raidz disk8 disk9 disk10 disk11 disk12 disk13 disk14 \
>   spare disk15
>
> That's pretty much dual parity/dual failure for both pools assuming I swap out the dead drive pretty quickly. Yeah?

In this example, you have one pool, with 2 vDevs. Each vDev can sustain one failure, but 2 failures in either vDev will take out the whole pool.

If you really can afford to trade performance (and no, I don't know how much you lose) for redundancy, it'd be better to do:

zpool create tank \
  raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
  disk8 disk9 disk10 disk11 disk12 disk13 disk14 \
  spare disk15

since now you can have any 2 disks fail (3 if the spare has time to get used), and the same space as your example.

 -Kyle
mike wrote:
>> No, that isn't correct.
>
>> One or more vdevs create a pool. Each vdev in a pool can be a
>> different type, eg a mix of mirror, raidz, raidz2.
>
>> There is no such thing as a zdev.
>
> Sorry :)
>
> Okay, so you can create a zpool from multiple vdevs. But you cannot
> add more vdevs to a zpool once the zpool is created. Is that right?

Yes, you can add more vdevs. What you can't do is add more disks to a raidz or raidz2 vdev, but you can add another raidz vdev to the pool.

For example: I start out with a pool of 6 disks in a raidz; that is one vdev. I then add another 6-disk raidz; that is two vdevs. Still one pool, and it looks like this:

        NAME         STATE     READ WRITE CKSUM
        cube         ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c5t0d0   ONLINE       0     0     0
            c5t1d0   ONLINE       0     0     0
            c5t2d0   ONLINE       0     0     0
            c5t3d0   ONLINE       0     0     0
            c5t4d0   ONLINE       0     0     0
            c5t5d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c5t8d0   ONLINE       0     0     0
            c5t9d0   ONLINE       0     0     0
            c5t10d0  ONLINE       0     0     0
            c5t11d0  ONLINE       0     0     0
            c5t12d0  ONLINE       0     0     0
            c5t13d0  ONLINE       0     0     0

I could if I wanted to add another vdev to this pool, but it doesn't have to be raidz - it could be raidz2 or mirror. You can add more "sides" to a mirror vdev and you can turn a single disk vdev into a mirror. You can not mirror a raidz or raidz a mirror.

> That's what it sounded like someone said earlier.

If they did they are wrong; hope the above clarifies.

--
Darren J Moffat
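[To sketch the mirror case with made-up pool and device names (an illustration, not a command from the thread): "zpool attach" takes an existing member of a vdev and the new device to attach alongside it.

# given a pool 'example' containing a plain single-disk vdev c5t6d0,
# attach a second device to turn that vdev into a two-way mirror
zpool attach example c5t6d0 c5t7d0

# the same command adds another "side" to an existing mirror; name any
# current member of the mirror as the first device
zpool attach example c5t7d0 c5t14d0
]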
On Fri, 22 Aug 2008, mike wrote:

> Oh sorry - for boot I don't care if it's redundant or anything.

8-O

> Worst case the drive fails, I replace it and reinstall, and just re-mount the ZFS stuff.

If you use a ZFS mirrored root, you just replace a drive when it fails. None of this reinstall nonsense.

> If I have the space in the case and the ports I could get a pair of
> 80 gig drives or something and mirror them using SVM (which was
> recommended to me by someone) or use ZFS for it, but I was a bit
> nervous since ZFS boot is still so new, but that'd be just a mirror

ZFS boot works fine; it only recently integrated into Nevada, but it has been in use for quite some time now.

> I was told to partition this way to make liveupgrade easy:
>
> /
> /lu = identical space as /
> swap

Even better: just use ZFS root and let it handle the details.

--
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
      http://www.linkedin.com/in/richteer
      http://www.myonlinehomeinventory.com
mike wrote:
> Sorry :)
>
> Okay, so you can create a zpool from multiple vdevs. But you cannot
> add more vdevs to a zpool once the zpool is created. Is that right?

Nope. That's exactly what you *CAN* do.

So say today you only really need 6TB usable: you could go buy 8 of your 1TB disks, and set up a pool with a single 7 disk RAIDZ1 vDev and a single spare today. Later when disks are cheaper, and you need the space, you could add a second 7 disk RAIDZ1 to the pool. This way you'd gradually grow into exactly the example you gave earlier.

Also, while it makes sense to use the same size drives in the same vDev, additional vDevs you add later can easily be made from different size drives. For the example above, when you got around to adding the second vDev, 2TB disks might be out; for the same space, you could create a vDev with fewer 2TB drives, or a vDev with the same number of drives and add twice the space, or some combo in between - just because your first vDev had 7 disks doesn't mean the others have to.

Another note, as someone said earlier: if you can go to 16 drives, you should consider 2 8disk RAIDZ2 vDevs over 2 7disk RAIDZ vDevs with a spare, or (I would think) even a 14disk RAIDZ2 vDev with a spare.

If you can (now or later) get room to have 17 drives, 2 8disk RAIDZ2 vDevs with a spare would be your best bet. And remember you can grow into it... 1 vDev and spare now, second vDev later.

 -Kyle

> That's what it sounded like someone said earlier.
On 8/22/08, Darren J Moffat <darrenm at opensolaris.org> wrote:

> I could if I wanted to add another vdev to this pool, but it doesn't
> have to be raidz - it could be raidz2 or mirror.

> If they did they are wrong; hope the above clarifies.

I get it now. If you add more disks they have to be in their own mirror/raidz/etc setup, but they can be added to the same large "pool" of space.
On 8/22/08, Kyle McDonald <KMcDonald at egenera.com> wrote:

> Another note, as someone said earlier: if you can go to 16 drives, you
> should consider 2 8disk RAIDZ2 vDevs over 2 7disk RAIDZ vDevs with a spare,
> or (I would think) even a 14disk RAIDZ2 vDev with a spare.
>
> If you can (now or later) get room to have 17 drives, 2 8disk RAIDZ2 vDevs
> with a spare would be your best bet. And remember you can grow into it... 1
> vDev and spare now, second vDev later.

This is actually probably a good idea for me, as buying that many disks right now would be a huge dent in the budget. Although it's for a good cause, I have a network of slowly overheating drives in USB cases with data spread all over the place. Bitrot is a daily concern.

I guess I can go with an 8 disk raidz2 to start with, and add a second one later. That's a decent % of the overall cost of the machine going towards parity...
It looks like this will be the way I do it:

initially:
zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7

when I need more space and buy 8 more disks:
zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15

Correct?

> Enable compression, and set up multiple raidz2 groups. Depending on
> what you're storing, you may get back more than you lose to parity.

It's DVD backups and media files. Probably everything has already been compressed pretty well by the time it hits ZFS.

> That's a lot of spindles for a home fileserver. I'd be inclined to go
> with a smaller number of larger disks in mirror pairs, allowing me to
> buy larger disks in pairs as they come on the market to increase
> capacity.

Or do smaller groupings of raidz1's (like 3 disks) so I can remove them and put 1.5TB disks in when they come out for instance?
On 8/22/08, Rich Teer <rich.teer at rite-group.com> wrote:

> ZFS boot works fine; it only recently integrated into Nevada, but it
> has been in use for quite some time now.

Yeah I got the install option when I installed snv_94 but wound up not having enough disks to use it.

> Even better: just use ZFS root and let it handle the details.

I assume since it's ZFS root it can make its own filesystem then for liveupgrade or something (that's the "details" you speak of) - in that case, I'm cool with that.
On Fri, Aug 22, 2008 at 1:08 PM, mike <mike503 at gmail.com> wrote:

> It looks like this will be the way I do it:
>
> initially:
> zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7
>
> when I need more space and buy 8 more disks:
> zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
>
> Correct?
>
>> Enable compression, and set up multiple raidz2 groups. Depending on
>> what you're storing, you may get back more than you lose to parity.
>
> It's DVD backups and media files. Probably everything has already been
> compressed pretty well by the time it hits ZFS.
>
>> That's a lot of spindles for a home fileserver. I'd be inclined to go
>> with a smaller number of larger disks in mirror pairs, allowing me to
>> buy larger disks in pairs as they come on the market to increase
>> capacity.
>
> Or do smaller groupings of raidz1's (like 3 disks) so I can remove
> them and put 1.5TB disks in when they come out for instance?

Somebody correct me if I'm wrong. ZFS (early versions) did not support removing vdevs from a pool. It was a future feature. Is it done yet?

--
chris -at- microcozm -dot- net
=== Si Hoc Legere Scis Nimium Eruditionis Habes
mike wrote:
> Or do smaller groupings of raidz1's (like 3 disks) so I can remove
> them and put 1.5TB disks in when they come out for instance?

I wouldn't reduce it to 3 disks (you should almost mirror if you go that low.)

Remember, while you can't take a drive out of a vDev, or a vDev out of a pool, you can *replace* the drives in a vDev. For example if you have 8 1TB drives in a RAIDZ (1 or 2) vDev, and buy 8 1.5TB drives, instead of adding a second vDev (which is always an option), you can replace 1 drive at a time, and as soon as the last drive in the vDev is swapped, you'll see the space in the pool jump.

Granted, if you need to buy drives gradually, swapping out 3 at a time (with 3 drive vDevs) is easier than 8 at a time, but you'll lose 33% of your space to parity instead of 25%, and you'll only be able to lose one disk (of each set of 3) at a time.

 -Kyle
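[A sketch of that one-at-a-time swap, with hypothetical pool and device names (not from the thread). Let each resilver finish before moving on; on some builds the extra capacity may only show up after an export/import once the last disk is done:

# swap one member of the vdev for a larger disk and wait for the resilver
zpool replace tank c5t0d0 c6t0d0
zpool status tank

# repeat for each remaining disk in the vdev, one at a time
]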
While on the subject, in a home scenario where one actually notices the electric bill personally, is it more economical to purchase a big expensive 1TB disk and save on electric to run it for five years, or to purchase two cheap 1/2 TB disks and spend double on electric for them for 5 years? Has anyone calculated this?

If this is too big a turn for this thread, let's start a new one and/or perhaps find an appropriate forum.

thx
jake

On Fri, Aug 22, 2008 at 1:14 PM, Chris Cosby <ccosby+zfs at gmail.com> wrote:
> On Fri, Aug 22, 2008 at 1:08 PM, mike <mike503 at gmail.com> wrote:
>>
>> It looks like this will be the way I do it:
>>
>> initially:
>> zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7
>>
>> when I need more space and buy 8 more disks:
>> zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
>>
>> Correct?
>>
>> > Enable compression, and set up multiple raidz2 groups. Depending on
>> > what you're storing, you may get back more than you lose to parity.
>>
>> It's DVD backups and media files. Probably everything has already been
>> compressed pretty well by the time it hits ZFS.
>>
>> > That's a lot of spindles for a home fileserver. I'd be inclined to go
>> > with a smaller number of larger disks in mirror pairs, allowing me to
>> > buy larger disks in pairs as they come on the market to increase
>> > capacity.
>>
>> Or do smaller groupings of raidz1's (like 3 disks) so I can remove
>> them and put 1.5TB disks in when they come out for instance?
>
> Somebody correct me if I'm wrong. ZFS (early versions) did not support
> removing vdevs from a pool. It was a future feature. Is it done yet?
mike wrote:
> On 8/22/08, Rich Teer <rich.teer at rite-group.com> wrote:
>
>> ZFS boot works fine; it only recently integrated into Nevada, but it
>> has been in use for quite some time now.
>
> Yeah I got the install option when I installed snv_94 but wound up not
> having enough disks to use it.

You only need 1 disk to use ZFS root. You won't have any redundancy, but as Darren said in another email, you can convert single device vDevs to mirror'd vDevs later without any hassle.

>> Even better: just use ZFS root and let it handle the details.
>
> I assume since it's ZFS root it can make its own filesystem then for
> liveupgrade or something (that's the "details" you speak of) - in that
> case, I'm cool with that.

Exactly. Filesystems can be created on the fly at any time on ZFS. I think it's actually Live Upgrade (today) or SnapUpgrade (the future) that will manage creating the ZFSs for you.

 -Kyle
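[For what it's worth, a sketch of converting a single-disk ZFS root to a mirror later on. The pool name "rpool", the device names, and the slice layout are assumptions, and on x86 the second disk also needs boot blocks so the box stays bootable from either half:

# attach a second disk to the existing root disk to form a mirror
zpool attach rpool c1t0d0s0 c1t1d0s0

# put GRUB on the new half so the machine can boot from either disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
]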
On 8/22/08, Kyle McDonald <KMcDonald at egenera.com> wrote:

> You only need 1 disk to use ZFS root. You won't have any redundancy, but as
> Darren said in another email, you can convert single device vDevs to
> mirror'd vDevs later without any hassle.

I'd just get some 80 gig disks and mirror them. Might as well :) (as long as I'm not killing my power supply limits with it)

> Exactly. Filesystems can be created on the fly at any time on ZFS. I think
> it's actually Live Upgrade (today) or SnapUpgrade (the future) that will
> manage creating the ZFSs for you.

Awesome. Excellent reuse of the technology available.
On Fri, 22 Aug 2008, Jacob Ritorto wrote:

> While on the subject, in a home scenario where one actually notices
> the electric bill personally, is it more economical to purchase a big
> expensive 1TB disk and save on electric to run it for five years or to
> purchase two cheap 1/2 TB disks and spend double on electric for them
> for 5 years? Has anyone calculated this?

In terms of potential data loss, more smaller disks will be more reliable and there is less risk of a problem while replacing a failed disk. In terms of cost, the 1/2 size disk may be of the same quality/performance grade but cost 1/2 as much as the bleeding-edge capacity disk.

Power consumption has more to do with the drive's targeted application than its raw capacity. Solaris supports power management, so you could allow it to spin down the drives when they are not being used, with the limitation that it might take 30 seconds before data is available if you have been away for a while. If the drives are spun down, zfs's access behavior will cause all of them to spin up at once.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>>>>> "m" == mike <opensolaris at mike2k.com> writes:m> that could only be accomplished through combinations of pools? m> i don''t really want to have to even think about managing two m> separate "partitions" - i''d like to group everything together m> into one large 13tb instance You''re not misreading the web pages. You can do that. I suggested two pools because of problems like Erik''s, and other known bugs. two pools will also protect you from operator mistakes, like if you cut-and-paste the argument to ''zfs destroy'' and end up with an embedded newline in the cut buffer at exactly the wrong spot, or mistakenly add an unredundant vdev, or get confused about how the snapshot tree works, or upgrade your on-disk-format then want to downgrade Solaris, or whatever. This mailing list is a catalog of reasons you need to have an offline backup pool, and you have enough disks to do it. The datacenter crowd on the list doesn''t need this because they have tape, or they have derived datasets which can be recreated. You do need it. I think you''ve gotten to the ``try it and see'''' stage. Why not try making pools in a bunch of different combinations and loading them with throwaway data. You can test performance. try scrubbing. test the redundancy by pulling drives, and see how the hot sparing works. Try rebooting during a hot-spare resilver, because during the actual rebuild this will probably happen a few times until you track down the driver''s poor error handling of a marginal drive. deliberately include marginal drives in the pool if you have some. You can get really better information this way, especially if the emails are too long to read. if you want a list of things to test.... :) seriously though if you have sixteen empty drives, that''s a fantastic situation. I never had that---I had to move my sixteen drives into zfs, 2 - 4 drives at a time. I think you ought to use your array for testing for at least a month. You need to burn in the drives for a month anyway because of infant mortality. m> you cannot add more vdevs to a zpool once the zpool is m> created. Is that right? That''s what it sounded like someone m> said earlier. I didn''t mean to say that. If you have empty devices, of course you can add them to a pool as a new vdev (though, as Darern said, once a vdev is added, you''re stuck with a vdev of that type, and if it''s raidz{,2} of that stripe-width, for the life of the pool. you can never remove a vdev.). you asked: m> can you combine two zpools together? you can''t combine two existing pools and keep the data in tact. You have to destroy one pool and add its devices to the other. I''m repeating myself: m> can you combine two zpools together? c> no. You can have many vdevs in one pool. for example you can c> have a mirror vdev and a raidz2 vdev in the same pool. You c> can also destroy pool B, and add its (now empty) devices to c> pool A. but once two separate pools are created you can''t c> later smush them together. so, I am not sure how this sounds. I usually rely on you to read the whole paragraph not the first word, but I guess the messages are just too long. How about ``yes, you can combine two pools. But before combining them you have to destroy one of the pools and all the data inside it. then, you can add the empty space to the other pool.'''' -------------- next part -------------- A non-text attachment was scrubbed... 
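[A few of those "try it and see" steps sketched as commands. The pool name, device names, and the manual spare swap are made up here for illustration; an administrative offline by itself won't exercise the hot-spare path - pulling the drive is the real test:

# build a throwaway pool and load it with scratch data
zpool create scratch raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 spare disk8

# exercise it and check for errors
zpool scrub scratch
zpool status -v scratch

# degrade the vdev administratively and bring it back
zpool offline scratch disk3
zpool status scratch
zpool online scratch disk3

# manually press the spare into service (or pull a drive to test the real path)
zpool replace scratch disk3 disk8

# when finished experimenting
zpool destroy scratch
]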
Yes, that looks pretty good mike. There are a few limitations to that as you add the 2nd raidz2 set, but nothing major. When you add the extra disks, your original data will still be stored on the first set of disks; if you've any free space left on those you'll then get some data stored across all the disks, and then I think that once the first set are full, zfs will just start using the free space on the newer 8.

It shouldn't be a problem for a home system, and all that will happen silently in the background. It's just worth knowing that you don't necessarily get the full performance of a 16 disk array when you do it in two stages like that.

Also, you mentioned in an earlier post about dual parity raid. Raid-z2 is very different from raid-z1, and it's not just a case of having a hot spare. Having the ability to lose two drives before data loss makes a massive difference to your chance of experiencing data loss.

The most dangerous period on a raidz (or raid-5) pool is after a drive fails, when you're rebuilding the data onto a new disk, and with larger drives the risk is increasing. While you're rebuilding the array, your data isn't protected at all, but you're having to re-read all of it in order to populate the new disk. Any errors during that read will result in data corruption at best, and a dead raid array at worst. With drives getting bigger, the chances of an error are increasing. From memory I think it's something around a 10% chance of an error for every 5TB you're rebuilding. Of course, that could well be just a single bit error, but I've seen a fair few raid-5 arrays die and experienced a couple of near misses, so I'm very paranoid now when it comes to my data.

Ross
On 8/22/08, Ross <myxiplx at hotmail.com> wrote:

> Yes, that looks pretty good mike. There are a few limitations to that as you add the 2nd raidz2 set, but nothing major. When you add the extra disks, your original data will still be stored on the first set of disks; if you've any free space left on those you'll then get some data stored across all the disks, and then I think that once the first set are full, zfs will just start using the free space on the newer 8.

> It shouldn't be a problem for a home system, and all that will happen silently in the background. It's just worth knowing that you don't necessarily get the full performance of a 16 disk array when you do it in two stages like that.

that's fine. I'll basically be getting the performance of an 8 disk raidz2 at worst, yeah? i'm fine with how the space will be distributed. after all this is still a huge improvement over my current haphazard setup :P
Yup, you got it, and an 8 disk raid-z2 array should still fly for a home system :D I'm guessing you're on gigabit there? I don't see you having any problems hitting the bandwidth limit on it.

Ross

> Date: Fri, 22 Aug 2008 11:11:21 -0700
> From: mike503 at gmail.com
> To: myxiplx at hotmail.com
> Subject: Re: [zfs-discuss] Best layout for 15 disks?
> CC: zfs-discuss at opensolaris.org
>
> On 8/22/08, Ross <myxiplx at hotmail.com> wrote:
>
> > Yes, that looks pretty good mike. There are a few limitations to that as you add the 2nd raidz2 set, but nothing major. When you add the extra disks, your original data will still be stored on the first set of disks; if you've any free space left on those you'll then get some data stored across all the disks, and then I think that once the first set are full, zfs will just start using the free space on the newer 8.
>
> > It shouldn't be a problem for a home system, and all that will happen silently in the background. It's just worth knowing that you don't necessarily get the full performance of a 16 disk array when you do it in two stages like that.
>
> that's fine. I'll basically be getting the performance of an 8 disk
> raidz2 at worst, yeah? i'm fine with how the space will be
> distributed. after all this is still a huge improvement over my
> current haphazard setup :P
yeah i am on gigabit, but the clients are things like an xbox which is only 10/100, etc. right now the setup works fine. i'm thinking the new CIFS implementation should make it run even cleaner too.

On 8/22/08, Ross Smith <myxiplx at hotmail.com> wrote:
> Yup, you got it, and an 8 disk raid-z2 array should still fly for a home
> system :D I'm guessing you're on gigabit there? I don't see you having any
> problems hitting the bandwidth limit on it.
>
> Ross