I am about to embark on building a home NAS box using OpenSolaris with ZFS.

Currently I have a chassis that will hold 16 hard drives, although not in caddies. Downtime doesn't bother me if I need to switch a drive; I could probably even do it while running, it's just a bit of a pain. :)

I am after suggestions for motherboard, CPU and RAM. Basically I want ECC RAM and at least two PCIe x4 slots, as I want to run 2 x AOC-USAS-L8i cards for the 16 drives.

I want something with a bit of guts, but not over the top. I know the HCL is there, but I want to see what other people are using in their solutions.

-- 
This message posted from opensolaris.org
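For what it's worth, the drive math above (2 x 8-port AOC-USAS-L8i cards = 16 drives) also fixes the capacity trade-offs once a vdev layout is chosen. A minimal sketch, assuming hypothetical 2 TB drives; the drive size and the two layouts are illustrative, not from the original post:

```python
# Rough usable-capacity sketch for a 16-drive ZFS pool.
# DRIVE_TB and the layouts below are assumptions for illustration.
DRIVE_TB = 2

def usable_tb(drives_per_vdev, parity, vdevs, size_tb=DRIVE_TB):
    """Usable data space: (data disks per vdev) * vdevs * drive size.

    Ignores ZFS metadata and allocation overhead, so treat the
    result as an upper bound.
    """
    return (drives_per_vdev - parity) * vdevs * size_tb

# Two 8-disk raidz2 vdevs (one per 8-port controller): 12 data disks.
print(usable_tb(8, 2, 2))  # -> 24 (TB)

# Eight 2-way mirrors: more IOPS, but only half the raw space.
print(usable_tb(2, 1, 8))  # -> 16 (TB)
```

The raidz2-per-controller split is one common way to carve up two 8-port HBAs, but it ties each vdev's fate to a single card; spreading each vdev across both controllers is the usual alternative.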
Nathan wrote:
> I am about to embark on building a home NAS box using OpenSolaris with ZFS.
>
> Currently I have a chassis that will hold 16 hard drives, although not in caddies. Downtime doesn't bother me if I need to switch a drive; I could probably even do it while running, it's just a bit of a pain. :)
>
> I am after suggestions for motherboard, CPU and RAM. Basically I want ECC RAM and at least two PCIe x4 slots, as I want to run 2 x AOC-USAS-L8i cards for the 16 drives.
>
> I want something with a bit of guts, but not over the top. I know the HCL is there, but I want to see what other people are using in their solutions.

Go back and look through the archives for this list. We just had this discussion last month. Let's not rehash it again, as it seems to get redone way too often.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Tim Foster
2009-Sep-25 09:18 UTC
[zfs-discuss] Collecting hardware configurations (was Re: White box server for OpenSolaris)
On Fri, 2009-09-25 at 01:32 -0700, Erik Trimble wrote:
> Go back and look through the archives for this list. We just had this
> discussion last month. Let's not rehash it again, as it seems to get
> redone way too often.

You know, this seems like such a common question on the list; would we (the ZFS community) be interested in coming up with a rolling set of 'recommended' systems that home users could use as a reference, rather than requiring people to trawl through the archives each time?

Perhaps a few tiers, with as many user-submitted systems per tier as we get:

* small: boot disk + 2 or 3 disks; low power, quiet, small media server
* medium: boot disk + 3 - 9 disks; home office, larger media server
* large: boot disk + 9 or more disks; thumper-esque

and keep them up to date as new hardware becomes available, with a bit of space on a website somewhere to manage them. These could be either off-the-shelf dedicated NAS systems or build-to-order machines, but capturing their configuration & last-known price would be useful.

I don't have enough experience myself in terms of knowing what's the best hardware on the market, but from time to time I do think about upgrading my system at home, and would really appreciate a zfs-community-recommended configuration to use.

Any takers?

cheers,
tim
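If someone does pick this up, the three proposed tiers could live on the wiki in a small machine-readable form so submissions can be filed automatically. A sketch: the disk ranges come from the tiers above, but the field names and the lookup helper are my own invention:

```python
# The three proposed tiers as data; ranges are (min_disks, max_disks),
# with None meaning "no upper bound". Field names are hypothetical.
TIERS = [
    {"name": "small",  "disks": (2, 3),    "use": "low power, quiet, small media server"},
    {"name": "medium", "disks": (3, 9),    "use": "home office, larger media server"},
    {"name": "large",  "disks": (9, None), "use": "thumper-esque"},
]

def tier_for(n_disks):
    """Return the name of the first tier whose disk range covers n_disks."""
    for tier in TIERS:
        lo, hi = tier["disks"]
        if n_disks >= lo and (hi is None or n_disks <= hi):
            return tier["name"]
    return None

print(tier_for(4))   # -> medium
print(tier_for(16))  # -> large
```

Note the ranges overlap at the boundaries (a 3-disk box matches both "small" and "medium"); this sketch resolves that by taking the first match, but a real wiki template would want to make the boundaries explicit.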
Eugen Leitl
2009-Sep-25 09:31 UTC
[zfs-discuss] Collecting hardware configurations (was Re: White box server for OpenSolaris)
On Fri, Sep 25, 2009 at 10:18:15AM +0100, Tim Foster wrote:
> I don't have enough experience myself in terms of knowing what's the
> best hardware on the market, but from time to time, I do think about
> upgrading my system at home, and would really appreciate a
> zfs-community-recommended configuration to use.
>
> Any takers?

I'm willing to contribute (ZFS on OpenSolaris, mostly Supermicro boxes, and FreeNAS (FreeBSD 7.2, probably 8.x next)). Is there a wiki for that somewhere?

-- 
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
It does seem to come up regularly... perhaps someone with access could throw up a page under the ZFS community with the conclusions (and periodic updates as appropriate).

On Fri, Sep 25, 2009 at 3:32 AM, Erik Trimble <Erik.Trimble at sun.com> wrote:
> Nathan wrote:
>> I am about to embark on building a home NAS box using OpenSolaris
>> with ZFS.
>>
>> Currently I have a chassis that will hold 16 hard drives, although not in
>> caddies. Downtime doesn't bother me if I need to switch a drive; I could
>> probably even do it while running, it's just a bit of a pain. :)
>>
>> I am after suggestions for motherboard, CPU and RAM. Basically I want ECC
>> RAM and at least two PCIe x4 slots, as I want to run 2 x AOC-USAS-L8i
>> cards for the 16 drives.
>>
>> I want something with a bit of guts, but not over the top. I know the HCL is
>> there, but I want to see what other people are using in their solutions.
>
> Go back and look through the archives for this list. We just had this
> discussion last month. Let's not rehash it again, as it seems to get redone
> way too often.
>
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
Cindy Swearingen
2009-Sep-25 14:44 UTC
[zfs-discuss] Collecting hardware configurations (was Re: White box server for OpenSolaris)
The opensolaris.org site will be transitioning to a wiki-based site soon, as described here:

http://www.opensolaris.org/os/about/faq/site-transition-faq/

I think it would be best to use the new site to collect this information, because it will be much easier for community members to contribute. I'll provide a heads up when the transition, which has been delayed, is complete.

Cindy

On 09/25/09 03:31, Eugen Leitl wrote:
> On Fri, Sep 25, 2009 at 10:18:15AM +0100, Tim Foster wrote:
>
>> I don't have enough experience myself in terms of knowing what's the
>> best hardware on the market, but from time to time, I do think about
>> upgrading my system at home, and would really appreciate a
>> zfs-community-recommended configuration to use.
>>
>> Any takers?
>
> I'm willing to contribute (ZFS on OpenSolaris, mostly Supermicro
> boxes, and FreeNAS (FreeBSD 7.2, probably 8.x next)). Is there a
> wiki for that somewhere?
> I am after suggestions for motherboard, CPU and RAM.
> Basically I want ECC RAM and at least two PCIe x4
> slots, as I want to run 2 x AOC-USAS-L8i cards
> for 16 drives.

Asus M4N82 Deluxe. I have one running with 2 AOC-USAS-L8i cards just fine. I don't have all the drives loaded in yet, but the cards are detected and they can use the drives I do have attached.

I currently have 8GB of ECC RAM on the board and it's working fine. The ECC options in the BIOS are enabled and it reports that ECC is enabled at boot. The board has 3 PCIe x16 slots; I have a graphics card in the other slot, and an Intel e1000g card in the PCIe x1 slot.

The onboard peripherals all work, with the exception of the onboard AHCI ports being buggy in b123 under xVM. Not sure what that's all about; I posted on the main discussion board but haven't heard whether it's a known bug or whether it will be fixed in the next version. A fix would be nice, as my boot drives are on that controller. 2009.06 works fine, though.

CPU is a Phenom II X3 720. Probably overkill for fileserver duties, but I also want to run some VMs for other things, which is how I found the xVM bug.

-- 
This message posted from opensolaris.org
On 09/25/09 11:08 AM, Travis Tabbal wrote:
> ... haven't heard if it's a known
> bug or if it will be fixed in the next version...

Out of courtesy to our host: Sun makes some quite competitive x86 hardware. I have absolutely no idea how difficult it is to buy Sun machines retail, but it seems they might be missing out on an interesting market: robust and scalable SOHO servers for the DIY gang. Certainly OEMs like us recommend them, although there doesn't seem to be a single-box file+application server in the lineup, which might be a disadvantage to some.

Also, assuming Oracle keeps the product line going, we plan to give them a serious look when we finally have to replace those sturdy old SPARCs. Unfortunately there aren't entry-level SPARCs in the lineup, but sadly there probably isn't a big enough market to justify them, and small developers don't need the big iron.

It would be interesting to hear from Sun whether they have any specific recommendations on Suns for the DIY SOHO market. AFAIK the profits from hardware go a long way toward funding the Sun support of FOSS that we are all benefiting from, and it's a good bet that OpenSolaris will run well on Sun hardware :-)

Cheers -- Frank
On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:
> On 09/25/09 11:08 AM, Travis Tabbal wrote:
>> ... haven't heard if it's a known
>> bug or if it will be fixed in the next version...
>
> Out of courtesy to our host: Sun makes some quite competitive
> x86 hardware. I have absolutely no idea how difficult it is
> to buy Sun machines retail,

Not very difficult. And there is try-and-buy.

People overestimate the cost of Sun, and underestimate the real value of "fully integrated".

--Toby

> but it seems they might be missing
> out on an interesting market: robust and scalable SOHO servers
> for the DIY gang ...
>
> Cheers -- Frank
Enrico Maria Crisostomo
2009-Sep-25 21:37 UTC
[zfs-discuss] White box server for OpenSolaris
On Fri, Sep 25, 2009 at 10:56 PM, Toby Thain <toby at telegraphics.com.au> wrote:
> On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:
>> Out of courtesy to our host: Sun makes some quite competitive
>> x86 hardware. I have absolutely no idea how difficult it is
>> to buy Sun machines retail,
>
> Not very difficult. And there is try-and-buy.

Indeed; at least in Spain and in Italy I had no problem buying workstations. I have recently owned both a Sun Ultra 20 M2 and an Ultra 24. I was very happy with them, and the price seemed very competitive to me compared with offers from other mainstream hardware providers.

> People overestimate the cost of Sun, and underestimate the real value of
> "fully integrated".

+1. People like "full integration" when it comes, for example, to Apple, iPods and iPhones. When it comes, just to make another example, to Solaris, ZFS, ECC memory and so forth (do you remember those posts some time ago?), they quickly forget.

-- 
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning."
GPG key: 1024D/FD2229AF
From a product standpoint, expanding the variety available in the Storage 7000 (Amber Road) line is somewhere I think we'd (Sun) make bank. Things like:

[for the home/very small business market]
Mini-tower sized case, 4-6 3.5" hot-swap SATA-only bays (to take the X2200-style spud-bracket drives), 2 CF slots (for boot), single socket with 4 DIMMs, and a built-in ILOM. Maybe a x4 PCIe slot, but maybe not.

[for the small business/branch office with no racks]
Mid-tower case, a 4-bay 2.5" hot-swap area, a 6-8 bay 3.5" hot-swap area, single socket, 4/6 DIMMs, ILOM. (2) x4 or x8 PCIe slots too. (I'd probably go with Socket AM3, with ECC, of course.)

I'd sell them both fully loaded with the Amber Road software (and a mandatory service contract), and as no-OS, no-service-contract appliance versions.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA