I was recently evaluating much the same question, but with only a single pool and disks sized equally. I only need about 500GB of usable space, so I was considering the value of 4x 250GB SATA drives versus 5x 160GB SATA drives. I had intended to use an AMS 5-disk-in-three-5.25"-bay hot-swap backplane: http://www.american-media.com/product/backplane/sata300/sata300.html

I priced Seagate 250GB and 160GB SATA drives at $70 and $54 USD each, respectively. The backplane runs about $130. The config options I considered are:

- RZ1 + spare
- RZ2, no spare
- mirror(s) (+ spare w/ 5x 160GB disks)

Using Richard's blogs to help my evaluation, I came to the following conclusions about the choices...

- Mirrors cost you 50% of your total space, but give you the best performance and "average" fault tolerance.
- RZ1 + spare gives you the same space (2 data, 1 parity, 1 spare w/ 4 disks), middle-of-the-road performance, and the worst fault tolerance.
- RZ2 gives you the same space as well (2 data, 2 parity w/ 4 disks), the worst performance, and the best fault tolerance.
- Spares give you negligible benefit over time with respect to cost. What I mean by this is that for the "home" environment, if a disk costs $100 today, next year it will cost $80, and the year after even less. Paying up front for a relatively small increase in reliability that declines quickly as time goes on is probably not a good "value" decision. In addition, drives are readily available in most populated areas if you can't wait for warranty replacement... and Seagate offers a 5-year warranty (which is why I spec'd Seagate, not WD).

My conclusion was to go with 4x 250GB in a single pool with two mirror vdevs (a sketch of the corresponding zpool commands follows the quoted message below). Overall I'd suggest that you scale your disks in proportion with your target total usable size, keep it simple, and stick to an even number of disks.

I still have yet to purchase the system due to my issues with finding the right board with the right SATA controller. My desktop system at home runs an nVidia 590a chipset on a Foxconn motherboard, and Solaris U3 will only recognize the DVD drive during installation.

A side note to that compatibility issue... a quick/dirty solution to the SATA controller problem: install VMware Server and configure VMware to supply the guest raw disks. VMware Server provides a nice hardware abstraction layer that has turned a partition on my SATA disk into a SCSI hard drive that Solaris interacts with swimmingly. I lose a little system resource to Host OS and VMware overhead, but at least I have a full-time Solaris system running on my home network, on hardware it would not otherwise agree with.

On 9/27/07, zfs-discuss-request at opensolaris.org <zfs-discuss-request at opensolaris.org> wrote:
> Date: Thu, 27 Sep 2007 10:10:24 PDT
> From: Christopher <joffer at online.no>
> Subject: Re: [zfs-discuss] Best option for my home file server?
>
> Hmm.. Thanks for the input. I want to have the most space but still need a raid in some way to have redundancy.
>
> I've added it up and found this:
> ggendel - your suggestion makes me "lose" 1TB - lose 250GBx2 for the raid-1 ones and then 500GB from a 3x500GB = 1TB
> bonwick - your first suggestion makes me "lose" 1TB. The second 750GB. The third, still 750GB but I gain 500GB more, since I now only lose 1/3 of 1500 instead of 1/2 of 1000.
>
> ggendel - yeah I know you would degrade both pools, but you would still be able to recover, unless our good friend Murphy comes around in between, as I would expect him to :-/
>
> So, as bonwick said - let's keep it simple :) No need to make it very complex.
>
> How is SATA support in OpenSolaris these days? I've read about ppl saying it has poor support, but I believe it was blogs and such from 2006. I downloaded the Developer Edition yesterday.
>
> I can have 10 SATA disks in my tower (8 onboard SATA connections and 2 from a controller card). Why not fill it :)
>
> I made a calculation, buying 1x500+3x750, 4x750 or 4x500 disks. The price/GB doesn't differ much here in Norway.
>
> Option 1 - Buying 4x750GB disks:
> 4x250 RaidZ - 750/250 (raid size / lost to redundancy)
> 2x500 Raid1 - 500/500
> 4x750 RaidZ - 2250/750
> Equals: 3500/1500 (3500GB space / 1500GB lost to redundancy)
> Cost: 4x750 costs NOK 6000 = US$ 1100
>
> Option 2 - Buy 1x500 + 3x750:
> 4x250 RaidZ - 750/250
> 3x500 Raid1 - 1000/500
> 3x750 RaidZ - 1500/750
> Equals: 3250/1500
> 1x750 costs NOK 1500 = US$ 270
> 3x500 costs NOK 3000 = US$ 550
> Total: US$ 820
>
> Option 3 - Buying 4x500GB disks:
> 4x250 RaidZ - 750/250
> 6x500 Raid1 - 2500/500
> Equals: 3250/750
> Cost: 4x500 costs NOK 4000 = US$ 720
>
> Option 2 is not winning in either cost or space, so that's out.
> Option 1 gives me 250GB more space but costs me NOK 2000 / US$ 360 more than option 3. For NOK 2000 I could get two more 500GB disks or one big 1TB disk.
>
> Obviously from a cost AND size perspective it would be best/smart to go for option 3 and have a raidz of 4x250 and one of 6x500.
>
> Comments?
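For reference, a minimal sketch of creating the two-mirror pool described above, assuming the four 250GB disks show up at hypothetical device names c1t0d0 through c1t3d0:

  # one pool made of two 2-way mirror vdevs: ~500GB usable from 4x 250GB
  zpool create mypool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

  # confirm the layout and capacity
  zpool status mypool
  zpool list mypool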
I considered this as well, but that's the beauty of marrying ZFS with a hotplug SATA backplane :)

I chose to use the 5-in-3 hot-swap chassis in order to give me a way to upgrade capacity in place; the 4-in-3 would be just as easy, though with higher risk.

1. hot-plug a new 500GB SATA disk into the 5th spot in the backplane.

2. zpool replace mypool c1t3d0 c1t4d0
   where c1t[0-3]d0 are my currently active 250GB drives

3. wait for resilver to complete.

4. hot-pull c1t3d0 and pull the drive from the chassis, replace with new 500GB drive, hot-plug back into backplane.

5. zpool replace mypool c1t0d0 c1t3d0

6. wait for resilver to complete, rinse and repeat.

As soon as all disks have been replaced, my zpool will be 1TB, not 500GB. (The whole sequence is collapsed into a command sketch after the quoted message below.)

On 9/27/07, zfs-discuss-request at opensolaris.org <zfs-discuss-request at opensolaris.org> wrote:
> Date: Thu, 27 Sep 2007 13:20:15 -0500
> From: David Dyer-Bennet <dd-b at dd-b.net>
> Subject: Re: [zfs-discuss] Best option for my home file server?
>
> Blake wrote:
> >> Obviously from a cost AND size perspective it would be best/smart to go
> >> for option 3 and have a raidz of 4x250 and one of 6x500.
> >>
> >> Comments?
>
> How long are you going to need this data? Do you have an easy and quick
> way to back it all up? Is the volume you need going to grow over time?
> For *my* home server, the need to expand over time ended up dominating
> the disk architecture, and I chose a less efficient (more space/money
> lost to redundant storage) architecture that was easier to upgrade in
> small increments, because that fit my intention to maintain the data
> long-term, and the lack of any efficient easy way to back up and restore
> the data (I *do* back it up to external firewire disks, but it takes 8
> hours or so, so I don't want to have to have the system down for a full
> two-way copy when I need to upgrade the disk sizes).
>
> --
> David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
> Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/
> Photos: http://dd-b.net/photography/gallery/
> Dragaera: http://dragaera.info
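Collapsed into a single command sketch (device names as in the steps above; resilver progress can be checked with zpool status):

  # first new 500GB disk goes into the spare 5th slot (c1t4d0)
  zpool replace mypool c1t3d0 c1t4d0
  zpool status mypool            # wait for the resilver to finish

  # pull the freed 250GB drive, put a new 500GB drive in its slot, then:
  zpool replace mypool c1t0d0 c1t3d0
  zpool status mypool            # wait again; repeat for c1t1d0 and c1t2d0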
I did that - it was nice. Took forever though on my PIII 700MHz :^)

blake/

On 9/27/07, Solaris <solaris at alderhost.net> wrote:
>
> I considered this as well, but that's the beauty of marrying ZFS with
> a hotplug SATA backplane :)
>
> I chose to use the 5-in-3 hot-swap chassis in order to give me a
> way to upgrade capacity in place; the 4-in-3 would be just as easy,
> though with higher risk.
>
> <snip>
>
> As soon as all disks have been replaced, my zpool will be 1TB, not 500GB.
David Dyer-Bennet
2007-Sep-27 20:01 UTC
[zfs-discuss] Best option for my home file server?
Solaris wrote:
> I considered this as well, but that's the beauty of marrying ZFS with
> a hotplug SATA backplane :)
>
> I chose to use the 5-in-3 hot-swap chassis in order to give me a
> way to upgrade capacity in place; the 4-in-3 would be just as easy,
> though with higher risk.
>
> 1. hot-plug a new 500GB SATA disk into the 5th spot in the backplane.
>
> 2. zpool replace mypool c1t3d0 c1t4d0
>    where c1t[0-3]d0 are my currently active 250GB drives
>
> 3. wait for resilver to complete.
>
> 4. hot-pull c1t3d0 and pull the drive from the chassis, replace with
> new 500GB drive, hot-plug back into backplane.
>
> 5. zpool replace mypool c1t0d0 c1t3d0
>
> 6. wait for resilver to complete, rinse and repeat.
>
> As soon as all disks have been replaced, my zpool will be 1TB, not 500GB.

Sure, that's the same process I have in mind; it's just that you have to replace all the disks in the vdev at once to get the new capacity, so I felt that sticking to mirrors (meaning only two disks in the vdev) was more suitable for my expected future history.

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
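For a two-disk mirror vdev, "replacing all the disks in the vdev" is only two operations. A sketch with hypothetical device names (the extra capacity appears once the second resilver completes, though on some releases an export/import of the pool may be needed to see it):

  # grow one mirror vdev from 250GB to 500GB disks
  zpool replace mypool c1t0d0 c2t0d0    # first side; wait for resilver
  zpool replace mypool c1t1d0 c2t1d0    # second side; wait for resilver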
You don't have to do it all at once... ZFS will function fine with one large disk and one small disk in a mirror; it just means you will only have as much space as the smaller disk.

As things stand now, if you have multiple vdevs in a pool and they are of diverse capacities, the striping becomes less effective (? not sure what the right word here is). Each vdev will be capable of being used to its entire capacity.

So suppose in your situation you have 4 equal disks, configured in two mirrors. In the future you reach 90% capacity and choose to upgrade by doubling the size of one vdev. Your pool will stripe using the remaining 10% of the original capacity as expected, and then use only the larger vdev from there on. Further in the future, if you then choose to upgrade again by increasing the capacity of the smaller vdev, the striping will resume, but there is no way to restripe all of the data evenly across both vdevs without copying it off, removing it, and copying it back again.

Overall, I'd think that if being prepared to upgrade your entire pool is a concern, then regardless of your zpool configuration you would want to start saving from day 1 for the upgrade, wait until the capacity becomes near critical, and upgrade the entire pool. This as opposed to being more reactionary and making a snap decision to buy what you can afford to band-aid the situation. After all, by the time you get around to upgrading the other vdevs the price/GB will have dropped even further, assuming you don't replace them with yet even larger disks than the other vdev.

On 9/27/07, David Dyer-Bennet <dd-b at dd-b.net> wrote:
<snip>
>
> Sure, that's the same process I have in mind, it's just that you have to
> replace all the disks in the vdev at once to get the new capacity, so I
> felt that sticking to mirrors (meaning only two disks in the vdev) was
> more suitable for my expected future history.
>
> --
> David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
> Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/
> Photos: http://dd-b.net/photography/gallery/
> Dragaera: http://dragaera.info
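A quick illustration of the mixed-size mirror point above, again with hypothetical device names; a 250GB and a 500GB disk mirrored together present only the smaller disk's capacity until the small side is also upgraded:

  zpool create testpool mirror c1t0d0 c2t0d0   # 250GB disk + 500GB disk
  zpool list testpool                          # SIZE shows roughly 250GB, not 500GB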
David Dyer-Bennet
2007-Sep-27 21:00 UTC
[zfs-discuss] Best option for my home file server?
Solaris wrote:
> You don't have to do it all at once... ZFS will function fine with one
> large disk and one small disk in a mirror; it just means you will only
> have as much space as the smaller disk.

Sure, but you get no *benefit* until you've done it all, so you "have to" in terms of actually upgrading the vdev size.

> As things stand now, if you have multiple vdevs in a pool and they are of
> diverse capacities, the striping becomes less effective (? not sure
> what the right word here is). Each vdev will be capable of being used
> to its entire capacity.
>
> So suppose in your situation you have 4 equal disks, configured in
> two mirrors. In the future you reach 90% capacity and choose to
> upgrade by doubling the size of one vdev. Your pool will stripe
> using the remaining 10% of the original capacity as expected, and
> then use only the larger vdev from there on. Further in the future, if
> you then choose to upgrade again by increasing the capacity of the
> smaller vdev, the striping will resume, but there is no way to restripe
> all of the data evenly across both vdevs without copying it off,
> removing it, and copying it back again.

Well, as I said, I see no realistic risk of pushing the performance limits of a home file server. I get *less* bandwidth through the network than I do from a direct-connected drive, and that's just one single drive.

> Overall, I'd think that if being prepared to upgrade your
> entire pool is a concern, then regardless of your zpool configuration
> you would want to start saving from day 1 for the upgrade, wait until
> the capacity becomes near critical, and upgrade the entire pool. This
> as opposed to being more reactionary and making a snap decision to buy
> what you can afford to band-aid the situation. After all, by the time
> you get around to upgrading the other vdevs the price/GB will have
> dropped even further, assuming you don't replace them with yet even
> larger disks than the other vdev.

I certainly expect each vdev to leapfrog the other when upgraded. That was the point. This way I have less paid-for but unused capacity lying around, and given the price and size of disk drives, that's a money-saver.

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
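A worked example of the leapfrog pattern, with made-up capacities and upgrade points:

  start:     mirror-0 = 2x 250GB, mirror-1 = 2x 250GB   -> ~500GB usable
  upgrade 1: mirror-0 -> 2x 750GB                       -> ~1000GB usable
  upgrade 2: mirror-1 -> 2x 1.5TB                       -> ~2250GB usable

Each step touches only two drives, and the vdev holding the oldest, smallest disks is always the next candidate.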
A couple of Neptunes or a server with a Niagara T2 will fix that (10GigE) :^)

On 9/27/07, David Dyer-Bennet <dd-b at dd-b.net> wrote:
>
> Well, as I said, I see no realistic risk of pushing the performance
> limits of a home file server. I get *less* bandwidth through the
> network than I do from a direct-connected drive, and that's just one
> single drive.