Slack-Moehrle
2010-Mar-08 02:09 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Hello All,

I built a new storage server to back up my data, keep archives of client files, etc. I recently had a near loss of important items.

So I built a 16 SATA bay enclosure (16 hot-swappable + 3 internal), 2 x 3Ware 8-port RAID cards, 8 GB RAM, dual AMD Opteron. I have a 1 TB boot drive and I put in 8 x 1.5 TB Seagate 7200 RPM drives. In the future I want to fill the other 8 SATA bays with 2 TB drives.

I don't have a lot of experience with ZFS, but it was my first thought for handling my data. In the past I have used Linux software RAID.

OpenSolaris or FreeBSD with ZFS?

I would probably have questions, is this a place to ask? Can anyone provide advice, thoughts, etc?

Best,
-Jason
Michael Shadle
2010-Mar-08 02:12 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
On Sun, Mar 7, 2010 at 6:09 PM, Slack-Moehrle <mailinglists at mailnewsrss.com> wrote:

> OpenSolaris or FreeBSD with ZFS?

zfs for sure. it's nice having something bitrot-resistant. it was designed with data integrity in mind.
David Dyer-Bennet
2010-Mar-08 02:40 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
On 3/7/2010 8:09 PM, Slack-Moehrle wrote:

> I built a new storage server to back up my data, keep archives of client files, etc. I recently had a near loss of important items.
>
> So I built a 16 SATA bay enclosure (16 hot-swappable + 3 internal), 2 x 3Ware 8-port RAID cards, 8 GB RAM, dual AMD Opteron.
>
> I have a 1 TB boot drive and I put in 8 x 1.5 TB Seagate 7200 RPM drives. In the future I want to fill the other 8 SATA bays with 2 TB drives.
>
> I don't have a lot of experience with ZFS, but it was my first thought for handling my data. In the past I have used Linux software RAID.
>
> OpenSolaris or FreeBSD with ZFS?

From everything I hear, the OpenSolaris way is still considerably more solid. I'm running OpenSolaris myself, so my info on the FreeBSD side is second-hand, though.

> I would probably have questions, is this a place to ask?

This is officially a place to discuss ZFS, which we do interpret to include asking questions, yes :-). They've been very, very helpful to me since I made this decision back in 2006.

The downside of OpenSolaris is that it doesn't have as broad hardware support as other choices. The most common places where this comes up — sound, video, and wireless networking — won't matter for this project, I don't think. I don't know off-hand if your disk controllers are supported. There's a hardware compatibility list that'll roughly tell you, or maybe somebody here who understands the disk issues well can just tell you.

If OpenSolaris is new to you (as it was to me in 2006), it's a learning curve. My experience with Linux (and with SunOS, back before it became Solaris) was in some ways a problem to be overcome: enough stuff is different that I kept running into things I thought I knew that turned out to work differently (especially service management, and finding log files). But it's well-documented, and this and other mailing lists are full of helpful experts.
What brought me to ZFS was the fact that it uses its own block checksums and verifies them on each read, and has the ability to "scrub" in the background, to go read and verify all the used blocks. I consider this very important for long-term archiving of data (which is what I'm doing on mine, and it sounds like you will be on yours). Also the fact that the basic on-disk structure is built on transactional integrity mechanisms. Also the fact that I could expand a pool (though not a RAID group) from day 1; something not available to me in any affordable consumer solution then (it's somewhat better now, with things like Drobos, if you're happy with proprietary formats).

Did you pick the chassis and disk size based on planned storage requirements, or because it's what you could get to build a big honking fileserver box? Just curious. Mine is much smaller: 8 3.5" hot-swap bays (plus I recently added 4 hot-swap 2.5" bays and moved the boot disks up there, so all 8 of the 3.5" bays are now available for data disks), and I've got three mirrored pairs of 400 GB disks currently. I just upgraded from 2 pair. I do quite a lot of digital photography; that's the majority of the data.

There's a best-practices FAQ at <http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide> which is well thought of by people here.

For a system where you care about capacity and safety, but not that much about IO throughput (that's my interpretation of what you said you would use it for), with 16 bays, I believe the expert opinion will tell you that two RAIDZ2 groups of 8 disks each is one of the better ways to go. With disks that big (you're talking 1.5 TB and up), if one disk fails, it takes a LONG time for the "resilver" operation to complete, and during that time in a singly-redundant group you're vulnerable to a single further failure (having already lost your redundancy). AND the disks are being unusually stressed, precisely by the resilver operation on top of normal use. AND it's not nearly uncommon enough for batches of disks to go out together, all with the same flaw. So a singly-redundant 8-drive group of large drives is thought to be very risky by many people here; people prefer double redundancy in groups that big with large drives.

These days everybody is all excited about clever ways you can use SSDs with ZFS (as read cache, and as intent log), but those are all about raising IO throughput, and probably won't be important to what you're doing.

-- 
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
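[Editor's note] The two-RAIDZ2-group layout David describes can be sketched as below. This is a minimal sketch: the pool name "tank" and the cXtYd0 device names are placeholders and would differ on the actual controllers.

```shell
# Create a pool from the first 8 drives as one RAIDZ2 group
# (double parity: any two drives in the group can fail safely).
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                         c1t4d0 c1t5d0 c1t6d0 c1t7d0

# Later, when the second set of 8 drives is installed, grow the
# pool by adding a second RAIDZ2 group (pool expands; the existing
# RAID group itself is not widened):
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                      c2t4d0 c2t5d0 c2t6d0 c2t7d0

# Run periodic scrubs so silent corruption is caught early:
zpool scrub tank
zpool status tank
```

With 1.5 TB drives, each 8-disk RAIDZ2 group leaves 6 data disks, so roughly 9 TB of raw usable space per group.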
Ian Collins
2010-Mar-08 02:47 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
David Dyer-Bennet wrote:

> For a system where you care about capacity and safety, but not that much about IO throughput (that's my interpretation of what you said you would use it for), with 16 bays, I believe the expert opinion will tell you that two RAIDZ2 groups of 8 disks each is one of the better ways to go. With disks that big (you're talking 1.5 TB and up), if one disk fails, it takes a LONG time for the "resilver" operation to complete, and during that time in a singly-redundant group you're now vulnerable to a single failure (having already lost your redundancy). AND the disks are being unusually stressed, precisely by the resilver operation on top of normal use. AND it's not nearly uncommon enough for batches of disks to go out together all with the same flaw. So a singly-redundant 8-drive group of large drives is thought to be very risky by many people here; people prefer double redundancy in groups that big with large drives.

Or even triple parity with 2 TB drives; see http://blogs.sun.com/ahl/entry/acm_triple_parity_raid

-- Ian.
Slack-Moehrle
2010-Mar-08 03:18 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Hi David,

> Did you pick the chassis and disk size based on planned storage requirements, or because it's what you could get to build a big honking fileserver box? Just curious.

I have a 4 TB Buffalo Terastation that cannot be expanded further, and I am using 2.7 TB. Also, I need to make sure e-mail, websites, etc. are routinely backed up.

I chose 1.5 TB drives as they were the best bang for the buck: I got them all on sale, 4 for $105 each and 4 of them for a whopping $50 each. All Seagate 7200 RPM. I chose the Chenbro 16-bay chassis as it allowed me to expand to a reasonable RAID past the Terastation and also to add more drives in the future.

> For a system where you care about capacity and safety, but not that much about IO throughput (that's my interpretation of what you said you would use it for), with 16 bays, I believe the expert opinion will tell you that two RAIDZ2 groups of 8 disks each is one of the better ways to go.

Yup, that is what I am doing. I have two 3ware RAID cards (each with 8 SATA ports) and I have the 8 x 1.5 TB drives to start with. I downloaded OpenSolaris today; I will get it installed tomorrow and see how I fare. Thanks for the link to the ZFS Best Practices Guide.

Do you have any thoughts on implementation? I think I would just like to put my home directory on the ZFS pool and just scp files up as needed. I don't think I need to mount drives on my Mac, etc. scp seems to suit me.

Best,
-Jason
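[Editor's note] The scp-based workflow Jason describes might look like the following sketch; the pool/filesystem names, the "fileserver" hostname, and the local paths are all hypothetical.

```shell
# On the server: a dedicated filesystem keeps backups separate
# and snapshot-able on their own schedule.
pfexec zfs create tank/backups

# From the Mac (or any ssh client): copy files up as needed.
scp -r ~/Documents/clients jason@fileserver:/tank/backups/

# After a backup run, snapshot the filesystem so older versions
# remain recoverable even if files are later overwritten:
ssh jason@fileserver pfexec zfs snapshot tank/backups@$(date +%Y%m%d)
```

Snapshots are cheap in ZFS (copy-on-write), so taking one per backup run costs little until data actually diverges.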
Ian Collins
2010-Mar-08 03:48 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Slack-Moehrle wrote:

> Do you have any thoughts on implementation? I think I would just like to put my home directory on the ZFS pool and just scp files up as needed. I don't think I need to mount drives on my Mac, etc. scp seems to suit me.

One important point to note is that you can only boot off a simple vdev (either a single drive or a mirror). scp will work well, or you can export individual ZFS filesystems using the SMB or NFS protocol so they can be mounted on PCs and Macs.

-- Ian.
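[Editor's note] The sharing options Ian mentions are exposed as ZFS filesystem properties. A minimal sketch, assuming a pool named "tank" with a "backups" filesystem (both placeholder names):

```shell
# Share a filesystem over NFS, handled natively by ZFS:
pfexec zfs set sharenfs=on tank/backups

# Or share it over SMB/CIFS for Windows and Mac clients via the
# in-kernel CIFS service; the share name after "name=" is arbitrary:
pfexec zfs set sharesmb=name=backups tank/backups

# Verify what is currently being shared:
zfs get sharenfs,sharesmb tank/backups
```

Because sharing is a property of the filesystem, it survives reboots and is inherited by child filesystems unless overridden.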
Erik Trimble
2010-Mar-08 03:51 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Be sure to read the 3Ware info on their controllers under OpenSolaris: http://www.3ware.com/kb/article.aspx?id=15643

That said, 3ware controllers are hardly the best option for an OpenSolaris server. You DON'T want to make use of any of their hardware RAID features, and you may not even make good use of the on-board cache. Frankly, if it's still an option, I'd return the 3Ware controllers and go out and get either a PCI-E LSI HBA (not a RAID controller) or the old trusty PCI-X Supermicro AOC-SAT2-MV8. You'll save some cash, and get a better-supported HBA in both cases.

Also, if you can, use 2 drives for a mirrored boot/root setup. They don't have to be big; I use mirrored 100 GB laptop drives, and that works quite well.

Be aware that you /will not/ get support for low-power modes in anything except the later-generation Opterons; essentially, only quad- or six-core versions have their power-saving features supported.

And, as the Best Practices doc points out, there is Never Too Much RAM.

-Erik

Slack-Moehrle wrote:

> Hello All,
>
> I built a new storage server to back up my data, keep archives of client files, etc. I recently had a near loss of important items.
>
> So I built a 16 SATA bay enclosure (16 hot-swappable + 3 internal), 2 x 3Ware 8-port RAID cards, 8 GB RAM, dual AMD Opteron.
>
> I have a 1 TB boot drive and I put in 8 x 1.5 TB Seagate 7200 RPM drives. In the future I want to fill the other 8 SATA bays with 2 TB drives.
>
> I don't have a lot of experience with ZFS, but it was my first thought for handling my data. In the past I have used Linux software RAID.
>
> OpenSolaris or FreeBSD with ZFS?
>
> I would probably have questions, is this a place to ask?
>
> Can anyone provide advice, thoughts, etc?
>
> Best,
> -Jason
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
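[Editor's note] Erik's mirrored boot/root suggestion amounts to attaching a second disk to the root pool. A sketch for OpenSolaris of that era; the rpool name is standard but the device/slice names are placeholders:

```shell
# Attach a second disk to the root pool, turning the single boot
# disk into a two-way mirror (device names are placeholders):
pfexec zpool attach rpool c3t0d0s0 c3t1d0s0

# On OpenSolaris, also install GRUB on the new disk so the system
# can still boot if the original boot disk dies:
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0

# Wait for the resilver to finish before relying on the mirror:
zpool status rpool
```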
Slack-Moehrle
2010-Mar-08 04:22 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Hi Erik,

> Be sure to read the 3Ware info on their controllers under OpenSolaris:
> http://www.3ware.com/kb/article.aspx?id=15643
>
> That said, 3ware controllers are hardly the best option for an OpenSolaris server. You DON'T want to make use of any of their hardware RAID features, and you may not even make good use of the on-board cache.

I wasn't planning on using the 3Ware hardware RAID, as I read that software RAID would be the way to go; I just have the cards so that I can plug 8 drives into each. I can't return them: I got them used for $50 each from a guy who did not know what they were for.

Do I have to do anything special to NOT use the 3ware hardware functionality?

-Jason
Dedhi Sujatmiko
2010-Mar-08 05:05 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
On Monday 08 March 2010, 10:09 AM, Slack-Moehrle wrote:

> OpenSolaris or FreeBSD with ZFS?

I also have some NAS storage at home, consisting of:

a. OpenSolaris booted from hard disk with ZFS, mostly doing NFS and iSCSI target for VMware ESX; Intel Core Duo CPU + ICH7 controller
b. EON storage (which is miniaturized OpenSolaris) booted from USB with the newest ZFS + dedup, shared using CIFS; Intel Atom with onboard SATA
c. EON storage (which is miniaturized OpenSolaris) booted from CF card with the newest ZFS + dedup, shared using Samba; Intel Atom with onboard SATA
d. FreeNAS 7.1 (which is appliance-based FreeBSD 7.2) booted from CF card, shared using Samba; AMD Sempron with 2 x SIL3114
e. FreeNAS 7 RC2 (which is appliance-based FreeBSD 7.2) booted from CF card, shared using Samba; Intel Atom with 1 x SIL3114

From my experience:

1. The OpenSolaris implementation of CIFS is much speedier than Samba. All the servers above are using 2-4 GB of RAM. If the same data (mostly JPEG pictures and AVI files) is shared to Windows or Linux clients, the CIFS server presents the directory faster. Slideshows, or jumping around in time in a movie, are also smoother with CIFS.

2. OpenSolaris (and EON) does not have a proper implementation of SMART monitoring. Therefore I cannot find out the temperature of my hard disks. Since these are DIY storage boxes without chassis environment monitoring, I consider this an important regression.

3. OpenSolaris (and EON) does not properly display the serial numbers of the Seagate hard disks I am using. If I use format to read the serial number, I always miss the last character. If I read them using the "hd" or "hdparm" utility, I miss the first character.

4. This still cannot be proved rigorously, but I have a hunch that OpenSolaris is pickier than FreeBSD. For the multi-port SATA controller, I am using the Silicon Image SIL3114, which is known to have a history of flakiness. If I run these disks under OpenSolaris (or EON), I get many data corruption problems, or a disk not responding to requests. However, if I run FreeNAS, I do not see any problems. That is why the SIL3114 is always paired with FreeNAS. I guess it may also be down to the SIL3114 driver.

Overall, I am happy using ZFS. Previously I used FreeBSD vinum and Linux mdraid, and always got data corruption and unexpected behaviour during disk problems or replacement.

Hope that helps,
Dedhi
rwalists at washdcmail.com
2010-Mar-08 05:11 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
On Mar 8, 2010, at 12:05 AM, Dedhi Sujatmiko wrote:

> 2. OpenSolaris (and EON) does not have a proper implementation of SMART monitoring. Therefore I cannot find out the temperature of my hard disks. Since these are DIY storage boxes without chassis environment monitoring, I consider this an important regression.
>
> 3. OpenSolaris (and EON) does not properly display the serial numbers of the Seagate hard disks I am using. If I use format to read the serial number, I always miss the last character. If I read them using the "hd" or "hdparm" utility, I miss the first character.

Both of these can be handled via smartctl (http://smartmontools.sourceforge.net/) as described here:

http://breden.org.uk/2008/05/16/home-fileserver-drive-temps/

As to the serial number, at least for Western Digital drives it was accurate.

Good luck,
Ware
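[Editor's note] A minimal smartctl sketch along the lines of the linked article; the device path is a placeholder, and whether the `-d sat` pass-through flag is needed depends on the controller and driver:

```shell
# Identity information, including model and full serial number:
smartctl -d sat -i /dev/rdsk/c1t0d0s0

# SMART attribute table, which includes the drive temperature
# (usually attribute 194, Temperature_Celsius):
smartctl -d sat -A /dev/rdsk/c1t0d0s0
```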
Erik Trimble
2010-Mar-08 07:59 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Slack-Moehrle wrote:

> Hi Erik,
>
> I wasn't planning on using the 3Ware hardware RAID, as I read that software RAID would be the way to go; I just have the cards so that I can plug 8 drives into each. I can't return them: I got them used for $50 each from a guy who did not know what they were for.
>
> Do I have to do anything special to NOT use the 3ware hardware functionality?
>
> -Jason

It's been a while since I used one, but IIRC, without any special configuration they present all drives to the OS in a JBOD-like mode, which is exactly what you want. If that's not what's happening, then you'll need to configure the controllers so that each disk is presented as a separate device (likely by making each disk a member of a 1-disk striped volume).

(Of course, you can always eBay them and get the cash for what they're /really/ worth.) <wink>

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
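[Editor's note] If the card is not already passing disks through, 3ware's tw_cli utility can export each port as a single-disk unit. A hedged sketch, assuming tw_cli is installed, the card appears as controller /c0, and the unit/cache options behave as on 9000-series cards; details vary by model and firmware:

```shell
# Show what the controller currently exports to the OS:
tw_cli /c0 show

# Export each physical port as its own single-disk unit so ZFS
# sees one device per drive (repeat for each populated port):
tw_cli /c0 add type=single disk=0
tw_cli /c0 add type=single disk=1

# With no battery backup unit, consider disabling the write cache
# on each unit so ZFS's transactional guarantees are not undermined:
tw_cli /c0/u0 set cache=off
```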
tomwaters
2010-Mar-08 10:42 UTC
[zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?
Hi Jason,

I spent months trying different OSes for my server and finally settled on OpenSolaris. The OS is just as easy to install, learn, and use as any of the Linux variants... and ZFS beats mdadm hands down. I had a server up and sharing files in under an hour.

Just do it (you'll know soon enough if the hardware is going to support OpenSolaris). If you are unsure, start by installing it to a virtual machine (e.g. VirtualBox) and have a play.

It's a very helpful community here and you'll get all the support you'll need.

I found these links and they may help get you started...

SETTING UP AN OPENSOLARIS NAS BOX: FATHER-SON BONDING (a very simple/easy guide; all you'll need to get a ZFS server up and running):
<http://blogs.sun.com/icedawn/entry/bondin>

zfs tutorial part 1:
<http://flux.org.uk/howto/solaris/zfs_tutorial_01>

Setting up a static network configuration with NWAM:
<http://malsserver.blogspot.com/2008/08/setting-up-static-network-configuration.html>

How do you configure OpenSolaris to automatically login a user?:
<http://forums.opensolaris.com/message.jspa?messageID=1125>

How to Set Up Samba in the OpenSolaris 2009.06 Release - Information Resources - wikis.sun.com:
<http://wikis.sun.com/display/OpenSolarisInfo/How+to+Set+Up+Samba+in+the+OpenSolaris+2009.06+Release>

Getting Started With the Solaris CIFS Service:
<http://wiki.genunix.org/wiki/index.php/Getting_Started_With_the_Solaris_CIFS_Service>

Mount NTFS / Ext2 / Ext3 / FAT 16 / FAT 32 in Solaris:
<http://blogs.sun.com/pradhap/entry/mount_ntfs_ext2_ext3_in>
and also here (note the 2 GB limit):
<http://www.iiitmk.ac.in/wiki/index.php/How_to_Mount/Unmount_NTFS,FAT32,ext3_Partitions_in_Opensolaris_5.11_snv_101b>

-- 
This message posted from opensolaris.org