I'm looking to build a cheap SAN for personal home use.

I'm looking at six 300 GB SATA drives and whatever hardware is
supported by the OS. It's been suggested that zfs and raidz under
opensolaris would be a good choice.

I've built a lot of these for commercial purposes and swore by SCSI or
Fibre Channel drives with dedicated controllers. I scoffed at software
RAID solutions.... zfs looks interesting, and since the machine is a
dedicated fileserver I can make the excuse that the raid 'controller'
is STILL a dedicated controller, although it's a PC motherboard...

I'd like to put samba, or some other package if there is something
better, on the machine to make it visible to the network. I might even
consider NFS if I can find a non-evil implementation ;-).

Does this seem reasonable? Is this combination ready for prime-time?
Granted, data loss will not cost me millions of dollars, but by the
same token I'm building this to provide what I hope will be a trusted
and reliable storage device.

I simply cannot find a hardware compatibility list for opensolaris;
the few links I find are all dead.

What SATA RAID controllers (simple or full featured) will opensolaris
support? What simple multiple-connection SATA controllers will
opensolaris support? I'd probably like to be able to support 8 drives.

Marc
On Tue, Dec 13, 2005 at 04:18:06PM -0700, m christensen wrote:
> It's been suggested zfs and raidz under opensolaris would be a good choice.

I would agree with that suggestion. :)

> I scoffed at software RAID solutions....

See below...

> Does this seem reasonable?

Yes.

> Is this combination ready for prime-time?

Again, yes. We've been running our home directories on ZFS internally
for over 1.5 years, using 48 x 250GB SATA drives and no HW RAID
controller.

> Granted data loss will not cost me millions of dollars, but by the
> same token I'm building this to provide what I hope to be a trusted
> and reliable storage device.

As the Mastercard ads say, some things are priceless.

> I simply can not find a hardware compatibility list for opensolaris,
> the few links I find are all dead.

Assuming that you're not running on some strange hardware, most stuff
just winds up working. Feel free to post your HW specs to this list
to verify compatibility:

  http://www.opensolaris.org/jive/forum.jspa?forumID=79

> What SATA RAID controllers, (simple or full featured) will
> opensolaris support?

If you plan on using RAID-Z, I wouldn't think that you'd need a SATA
RAID controller. You'd do better letting ZFS manage the redundancy.
See this thread for details:

  http://www.opensolaris.org/jive/message.jspa?messageID=14982#14982

> What simple multiple connection SATA controllers will opensolaris
> support? I'd probably like to be able to support 8 drives.

My favorite is this one:

  http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm

But the drivers have not yet made it into Nevada. I'm told the ETA is
before the end of this year. If that would be a problem for you, let me
know.

--Bill
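[For readers following along: the raidz setup Bill recommends really is
a one-liner. A minimal sketch, assuming the six drives enumerate as
c2t0d0 through c2t5d0 - the device names are hypothetical and will vary
per system:

  # one raidz pool, "tank", spanning six whole disks;
  # ZFS handles the parity, so no HW RAID controller is involved
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

  # a filesystem to share out, plus a sanity check
  zfs create tank/export
  zpool status tank

With six 300 GB drives, one disk's worth of capacity goes to parity,
leaving roughly 1.5 TB usable.]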
Bill answered your questions, but I'm curious about one comment...

m christensen wrote:
> I might even consider NFS if I can find a non-evil implementation ;-).

Any details on this? Solaris NFS is quite non-evil. I would use it over
samba (if possible). If there's something you've found that isn't
working how you want it to, please let us know.

eric
On 12/14/05 06:48, Bill Moore wrote:
>> Is this combination ready for prime-time?
>
> Again, yes. We've been running our home directories on ZFS internally
> for over 1.5 years, using 48 x 250GB SATA drives and no HW RAID
> controller.

Hi Bill, and anyone who can suggest suitable raid-z approaches using
T3 arrays.

The build nfs server I look after is currently 95% ufs and just a small
amount of zfs (old bits at that). I'm going to upgrade to snv_29 and
migrate the world to zfs.

We have 4 x T3 arrays, arranged as 2 partner pairs. Are there any
suggestions/recommendations as to what luns/volumes I should construct
on the T3s and present to zfs for raid-z use? I suspect (in my
considerable T3 expertise, ha ha) that they probably cannot present
each of the 9 drives in the array as a single lun, so I likely can't
treat them as a jbod. So I imagine I'll be obliged to use RAID-1
(ie, striped) T3 volumes - I'm undecided how many drives to use per lun
if that is the route to go. I'd guess or hope that striped volumes
should help the array performance with zfs. Any recommendations or
"this works for me" experiences welcomed.

Given our current software-mirrored RAID-5 approach on this storage,
I'll be able to break the mirrors and use the freed storage in my
initial zfs vdevs while keeping the live ufs data available in the
unmirrored RAID-5 volumes. So I will be able to experiment with zfs
vdevs on around 2 of the 4 bricks before I have to commit the config.

Thanks

Gavin
Thanks for your time and the info. I'm going to order parts today.

My only real questions as far as hardware goes are Athlon 64 support
and which disk controllers are supported. I can get the controller you
listed for $110.00 - seems fair enough. Considering I've never even
touched opensolaris, it may take several weeks to get everything
together and running anyway. I assume YOU are currently using it under
opensolaris and zfs. I will buy one with a reasonable expectation that
it'll be supported shortly.

As to NFS, over the years it's always had a bad reputation for being
slow, buggy and temperamental. I have personally seen some of these
issues over the years. One of the worst was under HPUX. Not to offend
anyone...

Thanks again.

Marc
Gavin Maltby wrote:
> Are there any suggestions/recommendations as to what luns/volumes I
> should construct on the T3s and present to zfs for raid-z use?

Hi Gavin,

I assume you are using either T3 or T3B arrays - in which case you are
limited to having two luns per brick. It's up to you how you'd go about
slicing/dicing them, but if you want to maximise the number of luns and
keep the hw raid controller benefits then I'd go with two raid-1 luns
of 4 disks each, leaving disk 9 (uXd9) as the designated hot spare for
the luns in the brick. That will give you 2x2 luns in your partner
pairs - just enough iirc for a raid-z config.

Support Services has an EIS checklist for T3/T3B (as well as just about
everything else which we provide an installation service for) which you
might find handy. (EIS is Enterprise Installation Services.)

best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
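[To make James's layout concrete: two RAID-1 LUNs per brick across four
bricks gives eight LUNs to hand to ZFS. A sketch of one possible pool,
with one raidz vdev per partner pair - all device names here are
hypothetical:

  # two 4-wide raidz vdevs, one per T3 partner pair
  zpool create build \
      raidz c3t0d0 c3t1d0 c4t0d0 c4t1d0 \
      raidz c5t0d0 c5t1d0 c6t0d0 c6t1d0

How to group the LUNs is a judgment call; a single 8-wide raidz vdev is
also possible and yields more usable space at the cost of a wider
parity stripe.]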
Hi, Marc.

Did you ever build this? I'm interested in a nearly identical project
to use as a home media server. If you did build it successfully, can
you please post what hardware you ended up using? My goal is to build
something pretty inexpensive without being low-quality... but I don't
need super-high performance.

Thanks,
Tom
Tom,

Just as a reference, here's what I built:

  Supermicro H8DCE motherboard
  Two AMD Opteron 246 processors (2 GHz)
  One Seagate 160GB IDE drive for the OS
  Four Seagate 250GB SATA drives for the ZFS raidz array
  Supermicro SCT-743 rackmount chassis

I've got the four 250GB SATA drives configured into one large ZFS raidz
pool, and users access this via Samba from their Windows PCs.
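[For anyone replicating the Samba part, the share side is only a few
lines of smb.conf. A minimal sketch - the share name, mountpoint, and
group below are made up for illustration:

  [tank]
      # point the share at the raidz pool's mountpoint
      path = /tank
      read only = no
      browseable = yes
      valid users = @staff

After editing smb.conf, restart smbd and the share should be visible
from Windows as \\server\tank (substituting your server's name).]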
Here's what I recently built, much lower-end than sodaant's:

  MSI Neo4-F motherboard
  AMD 64 3200+ (Venice)
  4x 300GB Seagate 7200.8 SATA (attached to onboard SATA controllers)
  2x 512MB Corsair VS RAM
  Antec P150 case + 2x 92mm Antec fans (for the 3.5" drive bays)
  Cheapest PCI-based Nvidia card I could find (eVGA FX 5200, I think)
  Misc. cables, optical drive, etc.

Note the video card is PCI, not PCI-e - I left the PCI-e slot open,
thinking that down the road, if I add disks, I could throw a SATA
controller in the x16 slot, as I have no use for graphics performance
on a server.

This is more than powerful enough not to be the bottleneck, with a good
chunk of headroom, serving files over NFS using gigabit. If I need to
expand later, I'll add an enclosure that converts my 3 empty 5.25" bays
into 5 front-loading 3.5" hotswap bays.

I posted some performance numbers here:

  http://opensolaris.org/jive/thread.jspa?threadID=4808&tstart=0
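[On the NFS-serving side, ZFS can manage the export itself through the
sharenfs property; a minimal sketch, assuming a pool named tank and a
Solaris client - names are illustrative:

  # server: share the filesystem over NFS (persists across reboots)
  zfs set sharenfs=on tank
  zfs get sharenfs tank

  # client (Solaris syntax; a Linux client would use mount -t nfs)
  mount -F nfs server:/tank /mnt

Setting the property on a parent filesystem is inherited by its
children, so one command can cover the whole pool.]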
Ben Lazarus wrote:
> Here's what I recently built, much lower-end than sodaant's:

I've built something similar:

  AMD 64 3200+ (Venice)
  Tyan 2865 mobo
  4x 250GB 7200.8 SATA (attached to onboard SATA controllers)
  2x 512MB Corsair VS RAM
  Antec SuperLanboy case (because it's quiet and has a 120mm drive
    cooling fan)
  Antec NeoHE 430W power supply
  Asus 6600GT Nvidia PCI-E graphics card
  DVD reader/writer
  Additional NIC cards

This will be the new house server, and it doubles as my desktop to help
cut power bills. I also upgraded the main switch to gigabit... Nice and
fast... ZFS would like a bit more RAM, though; I notice that the system
becomes sluggish when I write 100+ MB a second to the drives.

- Bart

--
Bart Smaalders            Solaris Kernel Performance
barts at cyber.eng.sun.com    http://blogs.sun.com/barts
Sorry, I unsubscribed from the list a few days ago and just saw your
message. My experience follows...

Thomas Grossi wrote:
> Did you ever build this? If you did build it successfully, can you
> please post what hardware you ended up using?

I purchased the suggested AOC-SAT2-MV8 SATA controller. I never did get
the drivers to make it work with solaris.

I bought two motherboards; the DFI LanParty motherboard finally worked
($135.00). I used a 4000+ dual-core Athlon CPU (I think) @ $320.00,
1 GB RAM at about $70.00, and a power supply @ $30.00 (I was actually
quite impressed with this one - very quiet).

The -unused- Supermicro disk controller above was $130.00. Not paying
attention, I bought the controller only to find that what many
resellers call PCI-X is really PCI Express, which is pretty much a
video-card slot, and NOT true PCI-X, which is a 64-bit bus. I'm still
trying to figure out if I can run the card in a standard PCI slot at
reduced throughput. It boots fine and finds the attached disks from the
BIOS just fine, but I'm still having fits making them work under ANY
operating system.

Disks: 6x 300GB Maxline III (1,000,000-hr MTBF) SATA drives @ $120.00
each, and a 250GB ATA boot drive @ $89.00.

I FINALLY, after several days of messing around, got OpenSolaris
(build 28?) to install. I got 4 of the disks to work via the 4 SATA
connectors native on the motherboard. After more pain and research than
should have been required (IMHO), I got a working C compiler on the
machine and built the raidz volume with a single command. VERY
impressive.

I built Bonnie, not bonnie++, and ran some tests. Boot-disk-to-raid-dir
and raid-dir-to-raid-dir file copies ran at about the same speed, about
2.4 GB/minute. Bonnie showed about 54 MB/sec random writes and about
70 MB/sec sequential writes. Sequential block reads were about
80-something MB/sec.

After the boot filesystem on the machine failed for the third time and
I had repeated the OS install, I gave up. I saw random system hangs,
and about 50% of the time the machine failed to boot. I started looking
at other operating systems.

I installed ubuntu with a linux 2.6.12 kernel, built a software raid
using mdadm, and put an XFS filesystem on it. XFS appears to be the one
good thing that came out of SGI ;-). Using the exact same hardware I
saw random writes of about 48 MB/sec and sequential writes of about
50 MB/sec, or about 10-15% slower when I compared my logged results.
BUT reads were a little more than twice as fast: I can sustain read
rates of 138 MB/sec indefinitely.

I really liked ZFS, but the machine itself needs to be stable and
reliable or I can't trust my storage array as a whole. I just could not
get that with opensolaris on my machines. I'm still messing with it,
but I'll probably end up using a linux variant and XFS or reiser. I'm
still looking for a decent driver for the SATA card.

If I were to do it again, I'd buy the next DFI motherboard up the line,
which includes 8 SATA connectors on the motherboard, for an extra 40
bucks.

Feel free to post this to the list if it rejects my post.

Thanks all for the help.

Marc
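[For reference, the Linux setup Marc describes comes down to a couple
of commands with mdadm and the XFS tools. A sketch only - it assumes a
RAID-5 layout over the four onboard-attached disks, and the device
names (/dev/sd[a-d]) and mountpoint are hypothetical:

  # create a 4-disk software RAID-5 array
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

  # make an XFS filesystem on it and mount it
  mkfs.xfs /dev/md0
  mkdir -p /array
  mount /dev/md0 /array

RAID-5 is the closest analog to raidz: one disk of parity, with the
important difference that md has no equivalent of ZFS's end-to-end
checksumming.]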
Actually I misremembered my test results. This is what I got from the
latest releases of ubuntu and suse.

UBUNTU ----------------------------------------------------------------

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
array            2G 48001  80 160477  38 35101   9 46012  81 178580  32 394.5   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1680  13 +++++ +++  1426  12  1681  15 +++++ +++   815   7
array,2G,48001,80,160477,38,35101,9,46012,81,178580,32,394.5,1,16,1680,13,+++++,+++,1426,12,1681,15,+++++,+++,815,7

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
array            8G 47556  79 139984  25 34492   9 47476  83 175737  29 261.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1764  15 +++++ +++  1283  10  1805  15 +++++ +++  1007   9
array,8G,47556,79,139984,25,34492,9,47476,83,175737,29,261.2,0,16,1764,15,+++++,+++,1283,10,1805,15,+++++,+++,1007,9

SUSE 10.1 Beta --------------------------------------------------------

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
linux            2G 52031  97 111632  38 33457  14 38673  69 171779  38 373.5   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1354  10 +++++ +++  1660  11   976   8 +++++ +++  1185  11
linux,2G,52031,97,111632,38,33457,14,38673,69,171779,38,373.5,1,16,1354,10,+++++,+++,1660,11,976,8,+++++,+++,1185,11

Marc
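[The tables above are in what appears to be bonnie++ 1.03 output
format; the comma-separated line at the end of each run is its
machine-readable summary. To reproduce runs like these, the invocation
is roughly as follows - the scratch directory is made up, and -u is
only needed when running as root:

  # 2 GB and 8 GB sequential/random I/O plus file-creation tests
  bonnie++ -d /array/scratch -s 2048 -u nobody
  bonnie++ -d /array/scratch -s 8192 -u nobody

Using a file size well beyond RAM (the 8 GB run here, on a 1 GB box)
keeps the numbers from being inflated by the page cache.]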
I've had a similar experience, and I'm beginning to consider switching
to linux because Solaris just hasn't been stable enough. I've had a ZFS
panic, CPU utilization problems, and several assorted hangs, and this
is just too much for a production machine.
Jerry Gardner wrote:
> I've had a similar experience, and I'm beginning to consider switching
> to linux because Solaris just hasn't been stable enough. I've had a
> ZFS panic, CPU utilization problems, and several assorted hangs, and
> this is just too much for a production machine.

Hi Jerry,

I'm quite surprised to find that you are using Solaris Express in
production. Your ZFS panic appears to have been caused by a shortage of
memory for the particular cache(s) that zfs requires. This is a known
issue (more prevalent on 32bit than on 64bit) -- and there is ongoing
work to resolve it. You are also running build 27 -- not the latest.
Your cpu utilisation is quite likely to have also been related to the
ZFS memory usage issue. Again, this is known. I have only just (last
30 minutes) had a chance to pull across your crash dump and do a brief
analysis of it.

If you want a stable production system, run Solaris 10. That's the
fully-tested version of Solaris that we ship. Solaris Express is a
snapshot of the development build, and as such we expect that there
will be a certain lack of stability. One consideration when running a
Solaris Express version is whether you are willing to put up with that
in order to help Sun and the OpenSolaris community make it a better
product.

best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
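[For anyone trying to confirm the same kind of kernel memory pressure
on their own box, the standard mdb dcmds give a quick view; a sketch,
run as root on the live kernel:

  # overall kernel memory breakdown
  echo ::memstat | mdb -k

  # per-kmem-cache detail, where the ZFS buffer caches show up
  echo ::kmastat | mdb -k

If the kernel's share of memory keeps growing while the box gets
sluggish, that is consistent with the cache-shortage issue James
describes.]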
When will Solaris 10 (not Express) support ZFS?
On Wed, Jan 18, 2006 at 02:15:01PM -0800, Jerry Gardner wrote:
> When will Solaris 10 (not Express) support ZFS?

See:

  http://www.opensolaris.org/os/community/zfs/faq/#whenavailable

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock