John Klimek
2009-Aug-12 18:48 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
I'm a software developer with a little bit of experience in Linux, but I've been wanting to build a fileserver and I've recently heard about ZFS.

Right now I'm considering Windows Home Server because I really don't need every file mirrored/backed up, but I do like what I've heard about ZFS.

Anyway, if I have a bunch of different-size disks (1.5 TB, 1.0 TB, 500 GB, etc.), can I put them all into one big array and have data redundancy, etc.? (RAID-Z?) Can I also expand that array at any time?

One thing that I definitely want is one single network share (\\server\movies) that I can transfer files to and have ZFS figure out how to place them across my disks. I'd then like to be able to add any size disk to my server and expand that storage space. Is this possible with ZFS?

-- 
This message posted from opensolaris.org
Erik Trimble
2009-Aug-12 19:11 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
Take a look back through the mail archives for more discussion about this topic (expanding zpools). The short answers are:

John Klimek wrote:
> I'm a software developer with a little bit of experience in Linux but I've been wanting to build a fileserver and I've recently heard about ZFS.
>
> Right now I'm considering Windows Home Server because I really don't need every file mirrored/backed up, but I do like what I heard about ZFS.
>
> Anyways, if I have a bunch of different-size disks (1.5 TB, 1.0 TB, 500 GB, etc), can I put them all into one big array and have data redundancy, etc? (RAID-Z?)

Yes. RAID-Z requires a minimum of 3 drives, and it can use different drives. Depending on the size differences, it will do the underlying layout in different ways. Depending on the number and size of the disks, ZFS is likely the best bet for using the most total space.

> Can I also expand that array at any time?

Not in the traditional sense of "I'm adding 1 drive to a 3-disk RAIDZ to make it a 4-disk RAIDZ". See the archives for how zpool expansion is done.

> One thing that I definitely want is one single network share (\\server\movies) that I can transfer files to and have ZFS figure out how to place them across my disks. I'd then like to be able to add any size disk to my server and expand that storage space.

This is more a function of Samba (the sharing portion). How the data is stored on disk is a function of any volume manager (ZFS included), and will be done automatically.

> Is this possible with ZFS?

Not really. Adding random-size disks in random amounts isn't optimal for ANY volume manager, not just ZFS. Due to the way RAID sets are set up in a volume manager, you may or may not be able to use the entire new disk space, you may or may not be able to add it to the RAID volume at all, and/or you may or may not be able to migrate the existing RAID set to a different kind of RAID set (i.e. move a RAID5 to RAID6, etc.).
No current volume manager or hardware RAID card can do what you want - that's an incredibly difficult thing to ask. ZFS works best with groups of identical disks, and can be expanded by adding groups of identical disks (not necessarily of the same size as the originals). Once again, please read the archives for more information about expanding zpools.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
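The expansion model Erik describes (growing a pool by adding whole vdevs rather than widening an existing RAIDZ) can be sketched numerically. This is a toy Python model of my own for following the thread's arithmetic, not ZFS code; the function name and vdev encoding are made up:

```python
def pool_capacity_tb(vdevs):
    """A ZFS pool stripes across its top-level vdevs, so usable pool
    capacity is the sum of each vdev's usable capacity.  Each vdev is
    modeled as a (redundancy_kind, member_disk_sizes) pair."""
    total = 0.0
    for kind, sizes in vdevs:
        if kind == "mirror":
            # A mirror yields one member's worth of space (the smallest).
            total += min(sizes)
        elif kind == "raidz":
            # Single-parity RAID-Z: (n - 1) members' worth, each sized
            # down to the smallest disk.
            total += min(sizes) * (len(sizes) - 1)
        else:
            raise ValueError(f"unknown vdev kind: {kind}")
    return total

pool = [("raidz", [1.0, 1.0, 1.0])]      # start with a 3 x 1 TB RAIDZ
print(pool_capacity_tb(pool))            # 2.0 TB usable

pool.append(("mirror", [2.0, 2.0]))      # expand by adding a mirror vdev
print(pool_capacity_tb(pool))            # 4.0 TB usable
```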
Eric D. Mudama
2009-Aug-12 19:28 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
>> Anyways, if I have a bunch of different-size disks (1.5 TB, 1.0 TB, 500 GB, etc), can I put them all into one big array and have data redundancy, etc? (RAID-Z?)
>
> Yes. RAID-Z requires a minimum of 3 drives, and it can use different drives. Depending on the size differences, it will do the underlying layout in different ways. Depending on the number and size of the disks, ZFS is likely the best bet for using the most total space.

I don't believe this is correct. As far as I understand it, RAID-Z will use the lowest common denominator for sizing the overall array. You'll get parity across all three drives, but it won't alter parity schemes for different regions of the disks.

Best bet for a "throw a bunch of random disks in it and don't worry about it" would probably be a Drobo. Not smoking fast by any stretch, but they appear to create an underlying parity scheme that can maximize space without sacrificing the ability to survive any single-disk failure.

>> Can I also expand that array at any time?
>
> Not in the traditional "I'm adding 1 drive to a 3-disk RAIDZ to make it a 4-disk RAIDZ". See the archives for how zpool expansion is done.

This is correct. The smallest unit of easy pool expansion in ZFS is adding a vdev. To have redundancy, mirrored vdevs use the fewest physical devices, and you can add mirrored pairs to your pool quite easily. This is what we use on our server in this branch office.

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
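The lowest-common-denominator sizing Eric describes reduces to a one-line calculation. A toy Python sketch of my own (not anything ZFS ships; the function name is made up):

```python
def raidz_usable_tb(disk_sizes_tb):
    """Approximate single-parity RAID-Z capacity: every member disk is
    effectively sized down to the smallest one, and one disk's worth
    of space goes to parity."""
    smallest = min(disk_sizes_tb)
    return smallest * (len(disk_sizes_tb) - 1)

# The mixed 1.5 TB / 1.0 TB / 0.5 TB example from this thread:
print(raidz_usable_tb([1.5, 1.0, 0.5]))  # 1.0 TB usable
```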
Erik Trimble
2009-Aug-12 20:44 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
Eric D. Mudama wrote:
> I don't believe this is correct. As far as I understand it, RAID-Z will use the lowest common denominator for sizing the overall array. You'll get parity across all three drives, but it won't alter parity schemes for different regions of the disks.

Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a RAIDZ, you will get only 1TB of usable space. Of course, there is always the ability to use partitions instead of the whole disk, but I'm not going to go into that. Suffice it to say, RAIDZ (and practically all other RAID controllers and volume managers) doesn't easily deal with maximizing space across different-size disks.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Adam Sherman
2009-Aug-12 21:30 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
I believe you will get .5 TB in this example, no?

A.

-- 
Adam Sherman
+1.613.797.6819

On 2009-08-12, at 16:44, Erik Trimble <Erik.Trimble at Sun.COM> wrote:
> Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a RAIDZ, you will get only 1TB of usable space. Of course, there is always the ability to use partitions instead of the whole disk, but I'm not going to go into that. Suffice it to say, RAIDZ (and practically all other RAID controllers and volume managers) doesn't easily deal with maximizing space across different-size disks.
Carson Gaspar
2009-Aug-12 21:43 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
Erik Trimble wrote:
> Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a RAIDZ, you will get only 1TB of usable space. Of course, there is always the ability to use partitions instead of the whole disk, but I'm not going to go into that. Suffice it to say, RAIDZ (and practically all other RAID controllers and volume managers) doesn't easily deal with maximizing space across different-size disks.

In the example above, the best you can get out of ZFS is 1.5TB. You'd get that by creating 2 mirrors: a (.5TB-of-1.5TB partition) + .5TB mirror, and a (1TB-of-1.5TB partition) + 1TB mirror. I _think_ that's also the best you can get, period, but I may be wrong. The absolute cap is 2TB (2/3 of the 3TB total), but in that spindle config I think the cap is 1.5TB.

While the "figure it out for me and make it as big as you can while still being safe" magic of Drobo is nice for home users, it's less than ideal for enterprise users that require performance guarantees.

It would be nice if somebody created a simple tool that, fed a set of disks, computed the configuration required for maximum usable redundant space.

-- 
Carson
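Carson's two-mirror layout can be sanity-checked with a bit of arithmetic. A toy Python sketch of my own, under the assumption that the largest disk is carved into one partition per smaller disk and each pair is mirrored (not a real tool, and the function name is made up):

```python
def mirrored_partition_capacity_tb(disk_sizes_tb):
    """Usable capacity of the partition-and-mirror scheme: the largest
    disk hosts one partition sized to match each smaller disk, and each
    (partition, smaller disk) pair becomes a mirror.  Only valid when
    the largest disk is at least as big as all the others combined."""
    sizes = sorted(disk_sizes_tb, reverse=True)
    largest, rest = sizes[0], sizes[1:]
    if largest < sum(rest):
        raise ValueError("largest disk too small to host all mirror partitions")
    # Each mirror contributes one smaller disk's worth of redundant space.
    return sum(rest)

# 1.5 TB carved into 0.5 TB + 1.0 TB partitions, each mirrored:
print(mirrored_partition_capacity_tb([1.5, 1.0, 0.5]))  # 1.5 TB usable
```

Compared with the 1.0 TB a plain three-disk RAIDZ would yield here, this layout recovers the extra 0.5 TB, matching Carson's figure.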
A Darren Dunham
2009-Aug-12 22:17 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
> > Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a RAIDZ, you will get only 1TB of usable space.

On Wed, Aug 12, 2009 at 05:30:14PM -0400, Adam Sherman wrote:
> I believe you will get .5 TB in this example, no?

The slices used on each of the three disks will be .5TB. Multiply by (3-1) for a total of 1TB usable.

-- 
Darren
Russell Hansen
2009-Aug-13 18:16 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
Is it possible to use the zfs copies property and put the disks individually into a pool? That would give you 3TB (1.5 + 1 + .5) usable.

http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection

states that copies will be spread across disks. But what I don't know (and don't have a test box to figure out) is whether losing a disk still kills the whole pool in spite of having multiple copies of the data. I have a sneaking suspicion the pool would be toast.

-Russ

> It would be nice if somebody created a simple tool that, fed a set of disks, computed the configuration required for maximum usable redundant space.
>
> -- 
> Carson
Eric D. Mudama
2009-Aug-13 20:49 UTC
[zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)
On Wed, Aug 12 at 17:30, Adam Sherman wrote:
> I believe you will get .5 TB in this example, no?

1.5T, 1.0T and 0.5T in a single RAID-Z is equivalent to three 0.5T drives in a RAID-Z, which gets you two units worth of capacity and one unit of parity, summing to 1.0T usable.

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org