Hi--

I'm looking forward to using zfs on my Mac at some point. My desktop server (a dual-1.25GHz G4) has a motley collection of discs that has accreted over the years: internal EIDE 320GB (boot drive), internal 250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.

My guess is that I won't be able to use zfs on the boot 320 GB drive, at least this year. I'd like to favor available space over performance, and be able to swap out a failed drive without losing any data.

So, what's the best zfs configuration in this situation? The FAQs I've read are usually related to matched (in size) drives.

Thanks!
Lee
On Fri, 4 May 2007, Lee Fyock wrote:

> So, what's the best zfs configuration in this situation? The FAQs
> I've read are usually related to matched (in size) drives.

Seriously, the best solution here is to discard any drive that is 3 years (or more) old [1] and purchase two new SATA 500GB drives. Set up the new drives as a zfs mirror. Being a believer in diversity, I'd recommend the following two products (one of each):

- Western Digital Caviar RE2 WD5000YS 500GB 7200 RPM 16MB Cache SATA 3.0Gb/s Hard Drive [2]
- Seagate Barracuda 7200.10 (Perpendicular Recording) ST3500630AS 500GB 7200 RPM 16MB Cache SATA 3.0Gb/s Hard Drive [3]

Not being familiar with Macs - I'm not sure about the availability of SATA ports on your motherboard.

[1] It continues to amaze me that many sites, large or small, don't have a (written) policy for mechanical component replacement - whether disk drives or fans.
[2] $151 at zipzoomfly.com
[3] $130 at newegg.com

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
On 4-May-07, at 6:53 PM, Al Hopper wrote:

> [1] It continues to amaze me that many sites, large or small, don't
> have a (written) policy for mechanical component replacement -
> whether disk drives or fans.

You're not the only one. In fact, while I'm not exactly talking "enterprise" level here - more usually "IT", and we know what that means - I've seen many RAID systems purchased and set up without any spare disks on hand, or any thought given to what happens next when one fails. Likely this is a combination of low expectations (you can usually blame Windows and everyone will believe it) from the computing services department combined with a lack of feedback ("you're fired") when massive data loss occurs.

--Toby
Isn't the benefit of ZFS that it will allow you to use even the most unreliable disks and be able to inform you when they are attempting to corrupt your data?

To me it sounds like he is a SOHO user; he may not have a lot of funds to go out and swap hardware on a whim like a company might. ZFS in my opinion is well-suited for those without access to continuously upgraded hardware and expensive fault-tolerant hardware-based solutions. It is ideal for home installations where people think their data is safe until the disk completely dies. I don't know how many non-savvy people I have helped over the years who have no data protection, and ZFS could offer them at least some fault-tolerance and protection against corruption, and could help notify them when it is time to shut off their computer and call someone to come swap out their disk and move their data to a fresh drive before it's completely failed...

- mike

On 5/4/07, Al Hopper <al at logical-approach.com> wrote:
> Seriously, the best solution here is to discard any drive that is 3 years
> (or more) old [1] and purchase two new SATA 500GB drives. Set up the new
> drives as a zfs mirror.
mike wrote:

> Isn't the benefit of ZFS that it will allow you to use even the most
> unreliable disks and be able to inform you when they are attempting to
> corrupt your data?
>
> To me it sounds like he is a SOHO user; he may not have a lot of funds
> to go out and swap hardware on a whim like a company might.

There's a limit to how much even ZFS can do with bad disks. Sure, it can manage a failing mirror better than SVM or low-end hardware RAID, but given the motley collection of drives in the OP's system, there aren't that many options. Given the silly prices of new drives (320GB are about the best $/GB), replacement is the best long-term option. Otherwise, mirroring the largest two drives and discarding the smallest might be a good option.
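For concreteness, a minimal sketch of that mirror, with hypothetical device names for the 250GB internal and the 600GB USB drive. ZFS limits a mirror to its smallest member, so this yields ~250GB of redundant space, and the 200 and 160 GB drives could hold a separate, non-redundant scratch pool:

  # zpool create tank mirror c1t1d0 c2t0d0

Ian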
On Fri, 4 May 2007, mike wrote:

> Isn't the benefit of ZFS that it will allow you to use even the most
> unreliable disks and be able to inform you when they are attempting to
> corrupt your data?

Yes - I won't argue that ZFS can be applied exactly as you state above. However, ZFS is no substitute for bad practices that include:

- not proactively replacing mechanical components *before* they fail
- not having maintenance policies in place

> To me it sounds like he is a SOHO user; he may not have a lot of funds
> to go out and swap hardware on a whim like a company might.

You may be right - but you're simply guessing. The original system probably cost around $3k (?? I could be wrong). So what I'm suggesting, that he spend ~$300, represents ~10% of the original system cost.

Since the OP asked for advice, I've given him the best advice I can come up with. I've also encountered many users who don't keep up to date with current computer hardware capabilities and pricing, and who may be completely unaware that you can purchase two 500GB disk drives, with a 5-year warranty, for around $300. And possibly less if you check out Fry's weekly bargain disk drive offers.

Now consider the total cost of ownership of the solution I recommended: 500 gigabytes of storage, coupled with ZFS, which translates into $60/year for 5 years of error-free storage capability. Can life get any better than this! :)

Now contrast my recommendation with what you propose - re-targeting a bunch of older disk drives, which incorporate older, less reliable technology, with a view to saving money. How much is your time worth? How many hours will it take you to recover from a failure of one of these older drives, with the accompanying increased risk of data loss?

If the ZFS-savvy OP comes back to this list and says "Al's solution is too expensive" I'm perfectly willing to rethink my recommendation. For now, I believe it to be the best recommendation I can devise.

> ZFS in my opinion is well-suited for those without access to
> continuously upgraded hardware and expensive fault-tolerant
> hardware-based solutions.

Agreed.

One piece-of-the-puzzle that's missing right now IMHO is a reliable, two-port, low-cost PCI SATA disk controller. A solid/de-bugged 3124 driver would go a long way to ZFS-enabling a bunch of cost-constrained ZFS users.

And, while I'm working this hardware wish list, please ... a PCI-Express based version of the SuperMicro AOC-SAT2-MV8 8-port Marvell based disk controller card. Sun ... are you listening?

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
On 5/4/07, Al Hopper <al at logical-approach.com> wrote:

> Yes - I won't argue that ZFS can be applied exactly as you state above.
> However, ZFS is no substitute for bad practices that include:
>
> - not proactively replacing mechanical components *before* they fail
> - not having maintenance policies in place

I mainly was speaking on behalf of the home users. If any data is important you obviously get what you pay for. However, I think ZFS can help improve integrity - perhaps you don't know the disk is starting to fail until it has corrupted some data. If ZFS was in place, some if not all of the data would still have been safe. I replace my disks when they start to get corrupt, and I am still always nervous and have high-stress data moves off failing disks to the new ones/temporary storage. ZFS in my opinion is a proactive way to minimize data loss. It's obviously not an excuse to let your hardware rot for years.

> And, while I'm working this hardware wish list, please ... a PCI-Express
> based version of the SuperMicro AOC-SAT2-MV8 8-port Marvell based disk
> controller card. Sun ... are you listening?

Yeah - I've got a wishlist too: port-multiplier-friendly PCI-e adapters... Marvell or SI or anything, as long as it's PCI-e and has 4 or 5 eSATA ports that can work with a port multiplier (for 4-5 disks per port) ... I don't think there is a clear, fully supported option yet or I'd be using it right now.

- mike
I didn't mean to kick up a fuss.

I'm reasonably zfs-savvy in that I've been reading about it for a year or more. I'm a Mac developer and general geek; I'm excited about zfs because it's new and cool.

At some point I'll replace my old desktop machine with something new and better -- probably when Unreal Tournament 2007 arrives, necessitating a faster processor and better graphics card. :-)

In the mean time, I'd like to hang out with the system and drives I have. As "mike" said, my understanding is that zfs would provide error correction until a disc fails, if the setup is properly done. That's the setup for which I'm requesting a recommendation.

I won't even be able to use zfs until Leopard arrives in October, but I want to bone up so I'll be ready when it does.

Money isn't an issue here, but neither is creating an optimal zfs system. I'm curious what the right zfs configuration is for the system I have.

Thanks!
Lee

On May 4, 2007, at 7:41 PM, Al Hopper wrote:

> If the ZFS-savvy OP comes back to this list and says "Al's solution is
> too expensive" I'm perfectly willing to rethink my recommendation. For
> now, I believe it to be the best recommendation I can devise.
Lee Fyock wrote:

> Money isn't an issue here, but neither is creating an optimal zfs
> system. I'm curious what the right zfs configuration is for the
> system I have.

Given the odd sizes of your drives, there might not be one, unless you are willing to sacrifice capacity.

Ian
Al Hopper wrote:

> Yes - I won't argue that ZFS can be applied exactly as you state above.
> However, ZFS is no substitute for bad practices that include:
>
> - not proactively replacing mechanical components *before* they fail

There's a nice side benefit from this one:

- the piece of hardware you retire becomes a backup of "old data"

When I ran lots of older SPARC boxes, I made a point of upgrading the disks, from 1GB to 4GB to 9GB... It wasn't for disk space but to put in place newer, quieter, faster, less power-hungry drives, and it had the added benefit of ensuring that in 2004, the SCA SCSI drive in the SPARC 5 was made maybe 1 or 2 years ago, not 10, and was thus also less likely to fail.

I still try to do this with PC hard drives today, but sometimes they fail inside my replacement window :-(

Darren
That's a lot of talking without an answer :)

> internal EIDE 320GB (boot drive), internal 250, 200 and 160 GB drives,
> and an external USB 2.0 600 GB drive.
>
> So, what's the best zfs configuration in this situation?

RAIDZ uses disk space like RAID5, and it sizes every member to the smallest disk. So the best you could do here for redundant space is (160 GB * 4 or 5) - 160 GB, and then use the remaining space as non-redundant or mirrored.
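A minimal sketch of that layout, with hypothetical device names for the four non-boot drives - since raidz truncates each member to the smallest disk, the 250/200/160/600 GB set yields roughly 3 x 160 GB of usable, redundant space:

  # zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c2t0d0
  # zpool list tank

If you want to play with opensolaris and zfs you can do so easily with a vmware or parallels virtual machine. It sounds like that is all you want to do right now.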
On 5-May-07, at 2:07 AM, MC wrote:

> If you want to play with opensolaris and zfs you can do so easily
> with a vmware or parallels virtual machine.

He can't, on the hardware in question: the machine is a G4. Lee is apparently anticipating the integration of ZFS with OS X 10.5. I would agree that, while he waits, he should rustle up a spare PC and install Solaris.

--Toby
Lee Fyock wrote:

> I'd like to favor available space over performance, and be able to
> swap out a failed drive without losing any data.

Lee Fyock later wrote:

> In the mean time, I'd like to hang out with the system and drives I
> have. As "mike" said, my understanding is that zfs would provide
> error correction until a disc fails, if the setup is properly done.
> That's the setup for which I'm requesting a recommendation.

ZFS always lets you know if the data you are requesting has gone bad. If you have redundancy, it provides error correction as well.

> Money isn't an issue here, but neither is creating an optimal zfs
> system. I'm curious what the right zfs configuration is for the
> system I have.

You obviously have the option of having a giant pool of all the disks, and what you get is dynamic striping. But if a disk goes toast, the data on it is gone. If you plan to back up important data elsewhere and data loss is something you can live with, this might be a good choice.

The next option is to mirror (/raidz) disks. If you mirror a 200 GB disk with a 250 GB one, you will get only 200 GB of redundant storage. If a disk goes for a toss, all of your data is safe. But you lose disk space. Mirroring the 600GB disk with a stripe of 160+200+250 would have been nice, but I believe this is not possible with ZFS (yet?).

There is a third option - create a giant pool of all the disks and set copies=2. ZFS will create two copies of all the data blocks. That is pretty good redundancy. But depending on how full your disks are, the copies may or may not be on different disks. In other words, this does not guarantee that *all* of your data is safe if, say, your 600 GB disk dies. But it might be 'good enough'. From what I understand your requirements are, this just might be your best choice.
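A minimal sketch of that third option, again with hypothetical device names, and assuming a build recent enough to have the copies property:

  # zpool create tank c1t1d0 c1t2d0 c1t3d0 c2t0d0   (dynamic stripe, ~1.2 TB raw)
  # zfs set copies=2 tank                           (duplicate every data block)

Note that copies=2 roughly halves your effective capacity, much as mirroring would.

A periodic scrub would also be a good thing to do - the earlier you detect a flaky disk, the better:

  # zpool scrub tank

Hope this helps.

-Manoj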
> Given the odd sizes of your drives, there might not be one, unless you
> are willing to sacrifice capacity.

I think, for the SoHo and home user scenarios, it might be an advantage if the disk drivers offered unified APIs to read out and interpret disk drive diagnostics - like SMART on ATA and whatever there is for SCSI/SAS - so that ZFS can react to them. Be it automatically invoking spare discs or showing warnings in the pool status. Or even automatically evacuating the device (given that ZFS will support it at some point) depending on the severity, should there be enough space on the other disks. For instance, going top to bottom through the filesystems by importance - which would, however, require an importance attribute.

-mg
Hi Lee,

You can decide whether you want to use ZFS for a root file system now. You can find this info here:

http://opensolaris.org/os/community/zfs/boot/

Consider this setup for your other disks, which are: 250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.

250GB = disk1
200GB = disk2
160GB = disk3
600GB = disk4 (spare)

I include a spare in this setup because you want to be protected from a disk failure. Since the replacement disk must be equal to or larger than the disk it replaces, I think this is the best (safest) solution.

zpool create pool raidz disk1 disk2 disk3 spare disk4

This setup provides less capacity but better safety, which is probably important for older disks. Because of the spare disk requirement (must be equal to or larger in size), I don't see a better arrangement. I hope someone else can provide one.
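To make the failure handling concrete, a sketch using the placeholder names above - if disk2 were to fail, the spare can be swapped in by hand like this (recent builds should also engage the spare automatically):

  # zpool status pool                (disk2 shows as FAULTED or UNAVAIL)
  # zpool replace pool disk2 disk4   (resilver onto the 600GB spare)

Your questions remind me that I need to provide add'l information about the current ZFS spare feature...

Thanks,

Cindy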
Cindy,

Thanks so much for the response -- this is the first one that I consider an actual answer. :-)

I'm still unclear on exactly what I end up with. I apologize in advance for my ignorance -- the ZFS admin guide assumes knowledge that I don't yet have.

I assume that disk4 is a hot spare, so if one of the other disks dies, it'll kick into active use. Is data immediately replicated from the other surviving disks to disk4?

What usable capacity do I end up with? 160 GB (the smallest disk) * 3? Or less, because raidz has parity overhead? Or more, because that overhead can be stored on the larger disks?

If I didn't need a hot spare, but instead could live with running out and buying a new drive to add on as soon as one fails, what configuration would I use then?

Thanks!
Lee

On May 7, 2007, at 2:44 PM, Cindy.Swearingen at Sun.COM wrote:

> zpool create pool raidz disk1 disk2 disk3 spare disk4
>
> This setup provides less capacity but better safety, which is probably
> important for older disks.
On 7-May-07, at 3:44 PM, Cindy.Swearingen at Sun.COM wrote:

> You can decide whether you want to use ZFS for a root file system now.
> You can find this info here:
>
> http://opensolaris.org/os/community/zfs/boot/

Bearing in mind that his machine is a G4 PowerPC. When Solaris 10 is ported to this platform, please let me know, too.

--Toby
Toby Thain wrote:

> Bearing in mind that his machine is a G4 PowerPC. When Solaris 10 is
> ported to this platform, please let me know, too.

For Solaris on PowerPC, it's probably easiest to just monitor this project:

http://www.opensolaris.org/os/community/power_pc/

-Luke
Andy Lubel
2007-May-07 20:27 UTC
[zfs-discuss] Motley group of discs? (doing it right, or right now)

I think it will be in the next.next (10.6) OSX; we just need to get Apple to stop playing with their silly cell phone (which I can't help but want, damn them!).

I have a similar situation at home, but what I do is use Solaris 10 on a cheapish x86 box with 6 400GB IDE/SATA disks. I then make them into iSCSI targets and use that free GlobalSAN initiator (LOL at Atto). I once was like you - had 5 USB/Firewire drives hanging off everything - and eventually I just got fed up with the mess of cables and wall warts.

Perhaps my method of putting together redundant and fast storage isn't as easy for everyone else to achieve. If you want more details about my setup, just email me directly, I don't mind :)

-Andy

On 5/7/07 4:48 PM, "Cindy.Swearingen at Sun.COM" <Cindy.Swearingen at Sun.COM> wrote:

> Yes, the hot spare (disk4) should kick in if another disk in the pool
> fails, and yes, the data is moved to disk4.
Lee,

Yes, the hot spare (disk4) should kick in if another disk in the pool fails, and yes, the data is moved to disk4.

You are correct: 160 GB (the smallest disk) * 3, plus raidz parity info. Here's the size of a raidz pool comprised of 3 136-GB disks:

# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool    408G    98K   408G    0%  ONLINE  -
# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool   89.9K   267G  32.6K  /pool

The pool is 408GB in size, but usable space in the pool is 267GB.

If you added the 600GB disk to the pool, then you'd still lose out on the extra capacity because of the smaller disks, which is why I suggested using it as a spare.

Regarding this:

> If I didn't need a hot spare, but instead could live with running out
> and buying a new drive to add on as soon as one fails, what
> configuration would I use then?

I don't have any add'l ideas, but I still recommend going with a spare.
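If you did skip the spare, a sketch of the manual path (placeholder names again): put all four drives in the raidz and replace a casualty with a store-bought disk when it happens, at the cost of running with no redundancy until the new drive arrives and resilvers:

  # zpool create pool raidz disk1 disk2 disk3 disk4   (~3 x 160GB usable)
  ...later, after disk2 fails and the new drive is installed:
  # zpool replace pool disk2 newdisk

Cindy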
Toby Thain
2007-May-07 23:12 UTC
[zfs-discuss] Motley group of discs? (doing it right, or right now)

On 7-May-07, at 5:27 PM, Andy Lubel wrote:

> I think it will be in the next.next (10.6) OSX,

<baselessSpeculation> Well, the iPhone forced a few months' schedule slip - perhaps *instead of* dropping features? </baselessSpeculation>

Mind you, I wouldn't be particularly surprised if ZFS wasn't in 10.5. Just so long as we get it eventually :-)

***suppresses giggle at MS, whose schedule slipped years AND dropped any interesting features***
Well, since we are talking about home use - I never tried this as a spare, but if you want to get real nutty, do the setup cindys suggested, but format the 600GB drive as UFS or some other filesystem and then try to create a 250GB file device as a spare on that UFS drive. It will give you redundancy and not waste all the space on the 600GB drive.

Zfs allows the use of file devices instead of hardware devices - "zpool create test /tmp/testfiledevice" as an example (the backing file has to exist before you hand it to the pool).
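A rough sketch of that, with a hypothetical mount point and file name - and note Richard's caveat below about ZFS on top of UFS files before trying this with data you care about:

  # mkfile 250g /ufs600/spare0            (preallocate a file on the UFS 600GB drive)
  # zpool add pool spare /ufs600/spare0   (offer it to the pool as a hot spare)

If you do it, let us know how it goes :)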
Bryan Wagoner wrote:

> Zfs allows the use of file devices instead of hardware devices -
> "zpool create test /tmp/testfiledevice" as an example

However, I do not believe it is safe to use files under UFS as ZFS vdevs. ZFS expects data to be flushed and, IIRC, UFS does not guarantee that for regular files. Search the archives for more info.

That said, you can certainly divide the 600 GByte disk into 3 slices. Later, you can always replace a slice with a different, bigger slice to grow.
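For example, a sketch with hypothetical device names (s2 is conventionally the whole-disk slice on Solaris, so the slices here are s0/s1/s3): carve the 600GB drive into 250, 200 and 160 GB slices with format(1M), then mirror each internal drive against one:

  # zpool create pool mirror c1t1d0 c2t0d0s0 \
                      mirror c1t2d0 c2t0d0s1 \
                      mirror c1t3d0 c2t0d0s3

That yields ~610GB of redundant space, and any single drive - including the 600GB USB disk - can fail without data loss.

 -- richard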
If you were really worried about it you could mount the UFS filesystem with forcedirectio or something. However, if you read the post, I mentioned doing it as a spare. A spare isn't active until there's a problem, so in theory you'd only be running on the file device temporarily anyway.