Ilya Tatar
2009-Feb-24 04:24 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Hello,

I am building a home file server and am looking for an ATX motherboard that will be well supported by OpenSolaris (onboard SATA controller, network, graphics if any, audio, etc.). I decided to go for Intel-based boards (socket LGA 775) since power management seems to be better supported with Intel processors, and power efficiency is an important factor. After reading several posts about ZFS it looks like I want ECC memory as well.

Does anyone have any recommendations? Here are a few that I found. Any comments about those?

Supermicro C2SBX+
http://www.supermicro.com/products/motherboard/Core2Duo/X48/C2SBX+.cfm

Gigabyte GA-X48-DS4
http://www.gigabyte.com.tw/Products/Motherboard/Products_Overview.aspx?ProductID=2810

Intel S3200SHV
http://www.intel.com/Products/Server/Motherboards/Entry-S3200SH/Entry-S3200SH-overview.htm

Thanks for any help,
-Ilya
Neal Pollack
2009-Feb-24 17:05 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On 02/23/09 20:24, Ilya Tatar wrote:
> I am building a home file server and am looking for an ATX motherboard
> that will be well supported by OpenSolaris (onboard SATA controller,
> network, graphics if any, audio, etc). [...]
> After reading several posts about ZFS it looks like I want ECC memory
> as well.
>
> Does anyone have any recommendations?

Any motherboard for the Core2 or Core i7 Intel processors with the ICH southbridge (desktop boards) or ESB2 southbridge (server boards) will be well supported. I recommend an actual Intel board, since they also always use the Intel network chip (well supported and tuned). Many of the third-party boards from MSI, Gigabyte, Asus, DFI, ECS, and others also work, but for some (penny-pinching) reason they tend to use network chips like Marvell that are not yet supported, or Realtek, for which some of the models are supported. So an actual board from Intel Corp will be best supported right out of the box.

For that matter, because of the work we do with Intel, almost any of their boards will be supported using the ICH 6, 7, 8, 9, or ICH10 SATA ports in either legacy or AHCI mode. Again, almost any version of the Intel network (NIC) chips is supported across all their boards. If you are able to find one that is not, I'd love to hear about it and add it to our work queue.

In the most recent builds of Solaris Nevada (SXCE), the integrated Intel graphics found on many of the boards is well supported. On other boards, use a low-end VGA card. Again, if you find an Intel board where the graphics is not supported or not working, please let us know the specifics and we'll fix it.

Cheers,

Neal
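For anyone checking an existing install, a quick way to confirm which driver the SATA ports came up under is to look at the driver bindings. These are standard Solaris commands; the exact output varies by build and board, so treat the grep targets as illustrative:

    # ports running in AHCI mode should show up bound to the ahci driver
    prtconf -D | grep -i ahci
    # in legacy/IDE mode they appear under pci-ide/ata instead
    prtconf -D | grep -i pci-ide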
Carson Gaspar
2009-Feb-24 17:13 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Neal Pollack wrote:
> On 02/23/09 20:24, Ilya Tatar wrote:
>> efficiency is an important factor. After reading several posts about
>> ZFS it looks like I want ECC memory as well.
[...]
> Any motherboard for the Core2 or Core i7 Intel processors with the ICH

Not. Intel decided we don't need ECC memory on the Core i7 (one of the few truly idiotic things I can remember them doing lately). The OP specified ECC RAM, so Core i7 is a no-go. Thanks for nothing, Intel.

-- 
Carson
Rob Logan
2009-Feb-24 18:27 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
> Not. Intel decided we don't need ECC memory on the Core i7

I thought that was a Core i7 vs. Xeon E55xx distinction for socket LGA-1366, and that's why this X58 MB claims ECC support:
http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm
Miles Nordin
2009-Feb-24 19:50 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server -- ECC claims
>>>>> "rl" == Rob Logan <Rob at Logan.com> writes:rl> that''s why this X58 MB claims ECC support: the claim is worth something. People always say ``AMD supports ECC because the memory controller is in the CPU so they all support it, it cannot be taken away from you by lying idiot motherboard manufacturers or greedy marketers trying to segment users into different demand groups'''' but you still need some motherboard BIOS to flip the ECC switch to ``wings stay on'''' mode before you start down the runway. Here is a rather outdated and Linux-specific workaround for cheapo AMD desktop boards that don''t have an ECC option in their BIOS: http://newsgroups.derkeiler.com/Archive/Alt/alt.comp.periphs.mainboard.asus/2005-10/msg00365.html http://hyvatti.iki.fi/~jaakko/sw/ The discussion about ECC-only vs scrub-and-fix, about how to read from PCI if ECC errors are happening (though not necessarily which stick), and his 10-ohm testing method, is also interesting. I still don''t understand what chip-kill means. I remember something about a memory scrubbing kernel thread in Solaris. This sounds like the AMD chips have a hardware scrubber? Also how are ECC errors reported in Solaris? I guess this is getting OT though. Anyway ECC is not just a feature bullet to gather up and feel good. You have to finish the job and actually interact with it. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090224/f2ed572f/attachment.bin>
Carson Gaspar
2009-Feb-24 22:04 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Rob Logan wrote:
>>> Not. Intel decided we don't need ECC memory on the Core i7
>
> I thought that was a Core i7 vs Xeon E55xx for socket
> LGA-1366 so that's why this X58 MB claims ECC support:
> http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm

They lie*. Read the Intel Core i7 specs - no ECC on any of them.

* They claim "future Nehalem processor families". These mysterious future CPUs may indeed support ECC. The Core i7-(920|940|965) do not.

-- 
Carson
On Tue, Feb 24, 2009 at 4:04 PM, Carson Gaspar <carson at taltos.org> wrote:
> They lie*. Read the Intel Core i7 specs - no ECC on any of them.
>
> * They claim "future Nehalem processor families". These mysterious future
> CPUs may indeed support ECC. The Core i7-(920|940|965) do not.

Given the current state of AMD, I think we all know that's not likely. Why cut into the revenue of your server line chips when you don't have to? Right?

--Tim
Brandon High
2009-Feb-25 20:39 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On Tue, Feb 24, 2009 at 2:29 PM, Tim <tim at tcsac.net> wrote:
> Given the current state of AMD, I think we all know that's not likely. Why
> cut into the revenue of your server line chips when you don't have to?
> Right?

AMD has nothing to do with whether ECC exists on the Nehalem.

Most likely ECC is in the memory controller of the Nehalem die, it's just disabled on the i7. It wouldn't make any sense to tape out a whole new die for the server version of the chip. The Xeon could use another stepping, but I'd expect Intel to use the same on both consumer and server versions of the chip.

-B

-- 
Brandon High : bhigh at freaks.com
On Wed, Feb 25, 2009 at 2:39 PM, Brandon High <bhigh at freaks.com> wrote:
> On Tue, Feb 24, 2009 at 2:29 PM, Tim <tim at tcsac.net> wrote:
>> Given the current state of AMD, I think we all know that's not likely. Why
>> cut into the revenue of your server line chips when you don't have to?
>> Right?
>
> AMD has nothing to do with whether ECC exists on the Nehalem.

Of course it does. Competition directly affects the features provided on everyone in a market segment's products.

> Most likely ECC is in the memory controller of the Nehalem die, it's
> just disabled on the i7. It wouldn't make any sense to tape out a
> whole new die for the server version of the chip. The Xeon could use
> another stepping, but I'd expect Intel to use the same on both
> consumer and server versions of the chip.

The fact Intel put a memory controller on die is PROOF that AMD has a direct effect on their product roadmap. Do you think Intel would have willingly killed off their lucrative northbridge chipset business without AMD forcing their hand? Please.

--Tim
Brandon High
2009-Feb-25 22:17 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On Wed, Feb 25, 2009 at 12:52 PM, Tim <tim at tcsac.net> wrote:
> Of course it does. Competition directly affects the features provided on
> everyone in a market segment's products.

The server and workstation market demands ECC. Any die that would be used in the server or workstation market would need to have ECC.

> The fact Intel put a memory controller on die is PROOF that AMD has a direct
> effect on their product roadmap. Do you think Intel would have willingly
> killed off their lucrative northbridge chipset business without AMD forcing
> their hand? Please.

Intel moved to an on-die memory controller because the front side bus architecture was becoming a bottleneck as the number of cores increased. The fact that AMD's chips already have an on-die controller certainly influenced Intel's direction - I'm not disputing that. The fact of the matter is that an on-die MC is an efficient way to have high bandwidth and low latency access to memory.

The IBM POWER6 has on-die memory controllers as well, which is less likely to be due to any market pressure caused by AMD since the two firms' products don't directly compete. It's just a reasonable engineering decision.

-B

-- 
Brandon High : bhigh at freaks.com
Blake
2009-Feb-26 16:52 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server -- ECC claims
IIRC, the AMD board I have at my office has hardware ECC scrub. I have no idea if Solaris knows about this or makes any use of it (or needs to?).

On Tue, Feb 24, 2009 at 2:50 PM, Miles Nordin <carton at ivy.net> wrote:
> the claim is worth something. People always say ``AMD supports ECC
> because the memory controller is in the CPU so they all support it, it
> cannot be taken away from you by lying idiot motherboard manufacturers
> or greedy marketers trying to segment users into different demand
> groups'' but you still need some motherboard BIOS to flip the ECC
> switch to ``wings stay on'' mode before you start down the runway.
> [...]
When I type `zpool import` to see what pools are out there, it gets to

/1: open("/dev/dsk/c5t2d0s0", O_RDONLY) = 6
/1: stat64("/usr/local/apache2/lib/libdevid.so.1", 0x08042758) Err#2 ENOENT
/1: stat64("/usr/lib/libdevid.so.1", 0x08042758) = 0
/1:     d=0x02D90002 i=241208 m=0100755 l=1 u=0 g=2 sz=61756
/1:     at = Apr 29 23:41:17 EDT 2009 [ 1241062877 ]
/1:     mt = Apr 27 01:45:19 EDT 2009 [ 1240811119 ]
/1:     ct = Apr 27 01:45:19 EDT 2009 [ 1240811119 ]
/1:     bsz=61952 blks=122 fs=zfs
/1: resolvepath("/usr/lib/libdevid.so.1", "/lib/libdevid.so.1", 1023) = 18
/1: open("/usr/lib/libdevid.so.1", O_RDONLY) = 7
/1: mmapobj(7, 0x00020000, 0xFEC70640, 0x080427C4, 0x00000000) = 0
/1: close(7) = 0
/1: memcntl(0xFEC50000, 4048, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
/1: fxstat(2, 6, 0x080430C0) = 0
/1:     d=0x04A00000 i=50333315 m=0060400 l=1 u=0 g=0 rdev=0x01800340
/1:     at = Nov 19 21:19:26 EST 2008 [ 1227147566 ]
/1:     mt = Nov 19 21:19:26 EST 2008 [ 1227147566 ]
/1:     ct = Apr 29 23:23:11 EDT 2009 [ 1241061791 ]
/1:     bsz=8192 blks=1 fs=devfs
/1: modctl(MODSIZEOF_DEVID, 0x01800340, 0x080430BC, 0xFEC51239, 0xFE8E92C0) = 0
/1: modctl(MODGETDEVID, 0x01800340, 0x00000038, 0x080D5A48, 0xFE8E92C0) = 0
/1: fxstat(2, 6, 0x080430C0) = 0
/1:     d=0x04A00000 i=50333315 m=0060400 l=1 u=0 g=0 rdev=0x01800340
/1:     at = Nov 19 21:19:26 EST 2008 [ 1227147566 ]
/1:     mt = Nov 19 21:19:26 EST 2008 [ 1227147566 ]
/1:     ct = Apr 29 23:23:11 EDT 2009 [ 1241061791 ]
/1:     bsz=8192 blks=1 fs=devfs
/1: modctl(MODSIZEOF_MINORNAME, 0x01800340, 0x00006000, 0x080430BC, 0xFE8E92C0) = 0
/1: modctl(MODGETMINORNAME, 0x01800340, 0x00006000, 0x00000002, 0x0808FFC8) = 0
/1: close(6) = 0
/1: ioctl(3, ZFS_IOC_POOL_STATS, 0x08042220) = 0

and then the machine dies consistently with:

panic[cpu1]/thread=ffffff01d045a3a0: BAD TRAP: type=e (#pf Page fault) rp=ffffff000857f4f0 addr=260 occurred in module "unix" due to a NULL pointer dereference

zpool: #pf Page fault
Bad kernel fault at addr=0x260
pid=576, pc=0xfffffffffb854e8b, sp=0xffffff000857f5e8, eflags=0x10246
cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
cr2: 260 cr3: 12b690000 cr8: c
        rdi:     260 rsi:       4 rdx: ffffff01d045a3a0
        rcx:       0  r8:      40  r9:   21ead
        rax:       0 rbx:       0 rbp: ffffff000857f640
        r10: bf88840 r11: ffffff01d041e000 r12:       0
        r13:     260 r14:       4 r15: ffffff01ce12ca28
        fsb:       0 gsb: ffffff01ce985ac0  ds:      4b
         es:      4b  fs:       0  gs:     1c3
        trp:       e err:       2 rip: fffffffffb854e8b
         cs:      30 rfl:   10246 rsp: ffffff000857f5e8
         ss:      38

ffffff000857f3d0 unix:die+dd ()
ffffff000857f4e0 unix:trap+1752 ()
ffffff000857f4f0 unix:cmntrap+e9 ()
ffffff000857f640 unix:mutex_enter+b ()
ffffff000857f660 zfs:zio_buf_alloc+2c ()
ffffff000857f6a0 zfs:arc_get_data_buf+173 ()
ffffff000857f6f0 zfs:arc_buf_alloc+a2 ()
ffffff000857f770 zfs:dbuf_read_impl+1b0 ()
ffffff000857f7d0 zfs:dbuf_read+fe ()
ffffff000857f850 zfs:dnode_hold_impl+d9 ()
ffffff000857f880 zfs:dnode_hold+2b ()
ffffff000857f8f0 zfs:dmu_buf_hold+43 ()
ffffff000857f990 zfs:zap_lockdir+67 ()
ffffff000857fa20 zfs:zap_lookup_norm+55 ()
ffffff000857fa80 zfs:zap_lookup+2d ()
ffffff000857faf0 zfs:dsl_pool_open+91 ()
ffffff000857fbb0 zfs:spa_load+696 ()
ffffff000857fc00 zfs:spa_tryimport+95 ()
ffffff000857fc40 zfs:zfs_ioc_pool_tryimport+3e ()
ffffff000857fcc0 zfs:zfsdev_ioctl+10b ()
ffffff000857fd00 genunix:cdev_ioctl+45 ()
ffffff000857fd40 specfs:spec_ioctl+83 ()
ffffff000857fdc0 genunix:fop_ioctl+7b ()
ffffff000857fec0 genunix:ioctl+18e ()
ffffff000857ff10 unix:brand_sys_sysenter+1e6 ()

The offending disk, c5t2d0s0, is part of a mirror; if it is removed, I can see the results (from the other mirror half) and the machine does not crash. All 8 labels look diff-perfect:

version=13
name='r'
state=0
txg=2110897
pool_guid=10861732602511278403
hostid=13384243
hostname='nas'
top_guid=6092190056527819247
guid=16682108003687674581
vdev_tree
    type='mirror'
    id=0
    guid=6092190056527819247
    whole_disk=0
    metaslab_array=23
    metaslab_shift=31
    ashift=9
    asize=320032473088
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=16682108003687674581
        path='/dev/dsk/c5t2d0s0'
        devid='id1,sd at f31cd3f064658835c000a06290005/a'
        phys_path='/pci at 0,0/pci15d9,d280 at 1f,2/disk at 2,0:a'
        whole_disk=0
        DTL=72
    children[1]
        type='disk'
        id=1
        guid=3306076269030000850
        path='/dev/dsk/c5t1d0s0'
        devid='id1,sd at SATA_____WDC_WD3200JD-00K_____WD-WCAMR2427509/a'
        phys_path='/pci at 0,0/pci15d9,d280 at 1f,2/disk at 1,0:a'
        whole_disk=0
        DTL=54

my question: How can I import half a mirror?

37 % zpool import -f 10861732602511278403 newpool
cannot import 'r' as 'newpool': invalid vdev configuration
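One approach that is sometimes suggested for this kind of situation is to limit where zpool looks for devices so that only the healthy half of the mirror is visible, then import degraded. This is a sketch only, with no guarantee it avoids the panic above, since the import may still probe the pool metadata that is triggering the crash; the directory name is hypothetical and c5t1d0s0 is the healthy disk per the label dump:

    # make only the good half of the mirror visible to the import scan
    mkdir /tmp/only-good
    ln -s /dev/dsk/c5t1d0s0 /tmp/only-good/
    zpool import -d /tmp/only-good -f 10861732602511278403 newpool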
Ok, so the choice for a MB boils down to:

- Intel desktop MB: no ECC support.
- Intel server MB: ECC support, but expensive (requires a Xeon for SpeedStep support). It is a shame to waste top kit doing nothing 24/7.
- AMD K8: ECC support (right?), no Cool'n'Quiet support (but maybe still cool enough with the right CPU?).
- AMD K10: should have the best of both worlds: ECC support, Cool'n'Quiet, and a cheap-ish, lowish-power CPU like the Athlon II 250.

Is my understanding correct? Like many, I want a reliable, cheap, low-power, ECC-supporting MB. Integrated video and a low-power chipset would be best. The SATA ports will have to come from an additional controller it seems, but that's life.

Intel gear is best supported, but they shoot themselves (or is that us?) in the foot with their ECC-on-server-boards-only policy.

AMD K10 seems the most tempting as it has it all. I wonder about Solaris support though. For example, is an AM3 MB OK with Solaris?

I'd like this hopefully to work right away with OpenSolaris 2009.06, without fiddling with drivers; I don't have much time or skills.

What AM3 MB do you guys know that is trouble-free with Solaris?

If none, maybe top-quality RAM (suggestions?) would allow me to forego ECC and use a well-supported low-power Intel board (suggestions?) instead? And an E5200?

Thanks for your insight.
-- 
This message posted from opensolaris.org
Bob Friesenhahn
2009-Jul-21 02:38 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On Mon, 20 Jul 2009, chris wrote:
> If none, maybe top quality ram (suggestions?) would allow me to
> forego ECC and use a well supported low power intel board
> (suggestions?) instead? and a E5200?

Even top quality RAM will not protect you from an alpha particle.

I would be surprised if the AMD K10 CPU caused any problem for Solaris. The chipset used on the motherboard is probably what you should pay attention to.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Thanks for your reply.
What if I wrap the RAM in a sheet of lead? ;-) (Hopefully the lead itself won't be radioactive.)

I found these 4 AM3 motherboards with "optional" ECC memory support. I don't know whether this means ECC actually works, or that ECC memory can be used but ECC will not be. Do you?

Asus M4N78 SE, Nvidia nForce 720D Chipset, 4x SATA
Asus M4N78-VM, Nvidia GeForce 8200 Chipset, 6x SATA, onboard video
Asus M4N82 Deluxe, NVIDIA nForce 980a Chipset, 6x SATA
Gigabyte GA-MA770T-UD3P, AMD 770 Chipset, 6x SATA

The 2nd one looks the most promising, and the GeForce 8200 seems somewhat supported by Solaris except for sound (don't care) and network (can add another card). A workaround is described in
http://pegolon.wordpress.com/2009/07/13/setting-up-my-solaris-fileserver-part-1/
but I'd be clueless if anything goes wrong.

What do you think?
-- 
This message posted from opensolaris.org
Keith Bierman
2009-Jul-21 04:59 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
> hopefully the lead itself won't be
> radioactive)

Or the chips themselves don't have some alpha particle generation. It has happened, and from premium vendors. There is no replacement for good system design :)

khbkhb at gmail.com
Sent from my iPod
Kyle McDonald
2009-Jul-21 15:19 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
chris wrote:
> Thanks for your reply.
> What if I wrap the ram in a sheet of lead? ;-)
> (hopefully the lead itself won't be radioactive)

I've been looking at the same thing recently.

> I found these 4 AM3 motherboards with "optional" ECC memory support. I
> don't know whether this means ECC works, or ECC memory can be used but
> ECC will not. Do you?

That's a good question. The ASUS specs definitely say unbuffered ECC memory is compatible, but until you mentioned it I never thought about whether the ECC functionality would actually be used.

> Asus M4N78 SE, Nvidia nForce 720D Chipset, 4xsata
> Asus M4N78-VM, Nvidia GeForce 8200 Chipset, 6xsata, onboard video
> Asus M4N82 Deluxe, NVIDIA nForce 980a Chipset, 6xsata
> Gigabyte GA-MA770T-UD3P, AMD 770 Chipset, 6xsata

I hadn't located the Gigabyte board yet; I'll have to look at that.

The ASUS boards with the AMD chipsets (the models that start with M4A - like the M4A79T) are all true AM3 boards - they take DDR3 memory. All the nVidia chipset boards (even the 980a one) are AM2+/AM3 boards, and (as far as I know) only take DDR2 memory, but that may not matter to you since this will only be a server for you. The chipset isn't supposed to dictate the memory type - that's up to the CPU - but the MB does need to support it in other ways. DDR3 doesn't appear (in any reviews I've seen) to give much benefit with the current processors anyway.

What I find more discouraging (since I'm trying to build a desktop/workstation) is that when you go to look for RAM, the only ECC memory available (doesn't matter if it's DDR2 or 3) is rated much slower than what is available for non-ECC. For example you can find DDR2 at 1066 MHz, or even 1200 MHz, but the fastest ECC DDR2 you can get is 800 MHz. It's cheap though - unless you want 4GB DIMMs, then it's outrageous!

> The 2nd one looks the most promising, and GeForce 8200 seems somewhat
> supported by solaris except for sound (don't care) and network (can add
> another card).

I don't see the 1st or the 2nd one at usa.asus.com. The 3rd is the one I've been considering hard lately. In my searching the other brands don't seem to support ECC memory at all.

Another thing to remember is the expansion slots. You mentioned putting in a SATA controller for more drives; you'll want to make sure the board has a slot that can handle the card you want. If you're not using graphics then any board with a single PCI-E x16 slot should handle anything. But if you do put in a graphics board you'll want to look at what other slots are available. Not many consumer boards have PCI-X slots, and only some have PCI-E x4 slots. PCI-E x1 slots are getting scarce too. Most of the PCI-E SATA controllers I've seen want a slot at least x4, and many are x8.

-Kyle
Joseph L. Casale
2009-Jul-21 15:48 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
> Another thing to remember is the expansion slots. You mentioned putting
> in a SATA controller for more drives, You'll want to make sure the board
> has a slot that can handle the card you want. If you're not using
> graphics then any board with a single PCI-E x16 slot should handle
> anything. But if you do put in a graphics board you'll want to look at
> what other slots are available. Not many consumer boards have PCI-X
> slots, and only some have PCI-E x4 slots. PCI-E x1 slots are getting
> scarce too. Most of the PCI-E SATA controlers I've seen want a slot at
> least x4, and many are x8.

Better check that: almost *all* consumer boards that have one 16-lane PCIe slot can only use a graphics card in that slot. I can confirm this to be true; most Intel boards are that way, and some Asus boards I have used behave this way as well.

As far as ECC for a home system, I run two ESXi servers, an Asterisk PBX, a Red Hat iSCSI server, etc., all on commodity mobos without ECC and have perfect uptime. I wouldn't do this at work, but for the half dozen people that use it _at home_, it works perfectly.

jlc
Kyle McDonald
2009-Jul-21 16:14 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Joseph L. Casale wrote:
> Better check that: almost *all* consumer boards that have one 16-lane PCIe
> slot can only use a graphics card in that slot. I can confirm this
> to be true; most Intel boards are that way, and some Asus boards I have
> used behave this way as well.

Really? That's good to know. Is that only for the first x16 slot, or (for MBs that have them) all the x16 slots? I wonder how it knows if it's a video card... hmm.

That makes the prospects of adding more SATA ports even harder. All the 8-port cards I've seen are either PCI-X or PCI-E, and many are physically x8 or more, even if they'll run with fewer lanes.

> As far as ECC for a home system, I run two ESXi servers, an Asterisk PBX,
> a Red Hat iSCSI server, etc., all on commodity mobos without ECC and
> have perfect uptime. I wouldn't do this at work, but for the half dozen
> people that use it _at home_, it works perfectly.

I know it might be overkill. But I have other reasons to like the MB, and if it can do ECC, and the memory doesn't cost more (and with 2GB DIMMs it doesn't), I figured why not. Though it is slower. Everything is a trade-off.

-Kyle
Nathan Fiedler
2009-Jul-21 17:42 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Regarding the SATA card and the mainboard slots, make sure that whatever you get is compatible with the OS. In my case I chose OpenSolaris, which lacks support for Promise SATA cards. As a result, my choices were very limited since I had chosen a Chenbro ES34069 case and Intel Little Falls 2 mainboard. Basically I had to go with the SYBA Sil3124 card and a flexible PCI adapter. More details here:

http://cafenate.wordpress.com/2009/07/13/building-a-nas-box/

No ECC memory, but I don't mind because the case has a great form factor and hot-swappable drive bays. If I could find a low power board that supported ECC and OpenSolaris, I'd consider switching.

Good luck.

n
Nicholas Lee
2009-Jul-22 04:53 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On Tue, Jul 21, 2009 at 4:20 PM, chris <no-reply at opensolaris.org> wrote:
> Thanks for your reply.
> What if I wrap the ram in a sheet of lead? ;-)
> (hopefully the lead itself won't be radioactive)
>
> I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
> know whether this means ECC works, or ECC memory can be used but ECC will
> not. Do you?

Often this means ECC memory will work but the ECC aspect will not. So the memory is usable, but not as you expect.

Nicholas
Nicholas Lee
2009-Jul-22 04:54 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
The i7 and Xeon 3300 m/b that say they have ECC support have exactly this problem as well.

On Wed, Jul 22, 2009 at 4:53 PM, Nicholas Lee <emptysands at gmail.com> wrote:
> Often this means ECC memory will work but the ECC aspect will not. So
> the memory is usable, but not as you expect.
>
> Nicholas
The i7 doesn't support ECC even if the motherboard supports it; you need a Xeon W3500-series CPU, which costs the same as an i7, to get ECC.
-- 
This message posted from opensolaris.org
Good news; the manual for the M4N78-VM mentions ECC and gives the following BIOS options: disabled/basic/good/super/maxi/user. Unsure what these mean, but that's a start.
-- 
This message posted from opensolaris.org
Found this:

  ECC Mode [Disabled]
  Disables or sets the DRAM ECC mode that allows the hardware to report and
  correct memory errors. Set this item to [Basic] [Good] or [Max] to allow
  ECC mode auto-adjustment. Set this item to [Super] to adjust the DRAM BG
  Scrub sub-item manually. You may also adjust all sub-items by setting this
  item to [User]. Configuration options: [Disabled] [Basic] [Good] [Super]
  [Max] [User]

I would have thought the checksum was either good or not. Apparently it's not so simple.

Now, about that unique PCIe-16 slot?
-- 
This message posted from opensolaris.org
F. Wessels
2009-Jul-23 12:42 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Hi,

I'm using Asus M3A78 boards (with the SB700) for OpenSolaris and M2A* boards (with the SB600) for Linux, some of them with 4x1GB and others with 4x2GB ECC memory. ECC faults will be detected and reported. I tested it with a small tungsten light: by moving the light source slowly towards the memory banks you'll heat them up in a controlled way, and at a certain point bit flips will occur.

I recommend you to go for an M4A board since they support up to 16 GB. I don't know if you can run OpenSolaris without a videocard after installation; I think you can disable the "halt on no video card" option in the BIOS, but Simon Breden had some trouble with it, see his homeserver blog. You can go for one of the three M4A boards with a 780G onboard. Those will give you 2 PCI-E x16 connectors. I don't think the onboard NIC is supported; I always put an Intel (e1000) in, just to prevent any trouble.

I don't have any trouble with the SB700 in AHCI mode. Hotplugging works like a charm. Transferring a couple of GBs over eSATA takes considerably less time than via USB. I have a PATA to dual CF adapter and two industrial 16GB CF cards as a mirrored root pool. It takes forever to install Nevada, at least 14 hours; I suspect the CF cards lack caches. But I don't update that regularly, still on snv104. And I have 2 mirrors and a hot spare. The sixth port is an eSATA port I use to transfer large amounts of data.

This system consumes about 73 watts idle and 82 under I/O load. (5 disks, a separate NIC, 8 GB RAM and a BE-2400, all using just 73 watts!!!)

Please note that frequency scaling is only supported on the K10 architecture. But don't expect too much power saving from it. A lower voltage yields far greater savings than a lower frequency.

In September I'll do a post about the aforementioned M4A boards and an LSI SAS controller in one of the PCIe x16 slots.
-- 
This message posted from opensolaris.org
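If you want to see whether frequency scaling is actually kicking in on such a board, the cpu_info kstats are one place to look. The statistic names below are assumptions based on more recent Nevada builds and may not exist on older ones:

    # current vs. supported operating frequencies, per CPU
    kstat -p cpu_info:::current_clock_Hz
    kstat -p cpu_info:::supported_frequencies_Hz

If the current clock never drops below the top supported frequency while the box is idle, scaling is probably not engaged.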
Richard Elling
2009-Jul-23 16:19 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On Jul 23, 2009, at 5:42 AM, F. Wessels wrote:
> I'm using Asus M3A78 boards (with the SB700) for OpenSolaris and
> M2A* boards (with the SB600) for Linux, some of them with 4x1GB and
> others with 4x2GB ECC memory. ECC faults will be detected and
> reported. I tested it with a small tungsten light: by moving the
> light source slowly towards the memory banks you'll heat them up in
> a controlled way, and at a certain point bit flips will occur.

I am impressed! I don't know very many people interested in inducing errors in their garage. This is an excellent way to demonstrate random DRAM errors. Well done!

> [...]
> This system consumes about 73 watts idle and 82 under I/O load.
> (5 disks, a separate NIC, 8 GB RAM and a BE-2400, all using just 73
> watts!!!)

How much power does the tungsten light burn? :-)
-- richard
Neal Pollack
2009-Jul-23 18:35 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On 07/23/09 09:19 AM, Richard Elling wrote:
> On Jul 23, 2009, at 5:42 AM, F. Wessels wrote:
>> But you can go for one of the three M4A boards with
>> a 780G onboard. Those will give you 2 PCI-E x16 connectors. I don't
>> think the onboard NIC is supported.

What is the specific model of the onboard NIC chip?
We may be working on it right now.

Neal
Thanks for this, good news! Yes, I would try to use onboard video.

> Please note that frequency scaling is only supported
> on the K10 architecture. But don't expect too much
> power saving from it. A lower voltage yields far
> greater savings than a lower frequency.

Doesn't Cool'n'Quiet step the voltage as well?

An Athlon X2 5050e and an Athlon II X2 250 are the same price. The former has a TDP of 45W, while the latter is 65W. But the 250 uses 45nm technology and the K10 architecture, so I would hope that its power consumption at idle would be lower. Would you agree?

Also, out of interest, do you know what the ECC BIOS modes mean?
-- 
This message posted from opensolaris.org
The Asus M4N78-VM uses a Nvidia GeForce 8200 Chipset (This board only has 1 PCIe-16 slot though, I should look at those that have 2 slots). -- This message posted from opensolaris.org
Oh, and another unrelated question: would I be better off using OpenSolaris or Solaris Express Community Edition (SXCE)? I suspect SXCE has more drivers (though maybe in a more beta state?), but its huge download size (several days in backward New Zealand, thanks Telecom NZ!) means I would only try it if there is good justification. What would you guys recommend? (I know, this is an OpenSolaris forum, but at least can you tell me how these 2 differ?)
-- 
This message posted from opensolaris.org
Miles Nordin
2009-Jul-23 22:23 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
>>>>> "c" == chris <no-reply at opensolaris.org> writes:c> do you know what the ECC BIOS modes mean? It''s about the hardware scrubbing feature I mentioned. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090723/7ff6bb8e/attachment.bin>
Erik Trimble
2009-Jul-23 23:02 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
I'm going the other route here, and using an Intel small server motherboard. I'm currently trying the Supermicro X7SBE, which supports a non-Xeon CPU and _should_ actually use the (unbuffered) ECC RAM I have in it. It can also support a network KVM IPMI board, which is nice (though not cheap - i.e. $100 or so).

The Supermicro X7SBL-LN[12] boards also look good, though they won't support the network KVM option.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Haudy Kazemi
2009-Jul-23 23:18 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
chris wrote:
> Ok, so the choice for a MB boils down to:
>
> - Intel desktop MB, no ECC support

This is mostly true. The exceptions are some implementations of the Socket T LGA 775 (i.e. late Pentium 4 series, and Core 2) D975X and X38 chipsets, and possibly some X48 boards as well. Intel's other desktop chipsets do not support ECC.

Some motherboard examples include:
Intel DX38BT - ECC support is mentioned in the documentation and is a BIOS option
Gigabyte GA-X38-DS4, GA-EX38-DS4 - ECC support is mentioned in the documentation and is listed in the website FAQ

The Sun Ultra 24 also uses the X38 chipset.

It's not clear how well ECC support has actually been implemented on the Intel and Gigabyte boards, i.e. whether they are simply compatible with unbuffered ECC memory, or actually able to initialize and use the ECC capability. I mentioned the X48 chipset above because discussions surrounding it say it is just a higher-binned X38 chip.

On Linux, the EDAC project maintains software to manage the motherboard's ECC capability. A list of memory controllers currently supported by Linux EDAC is here:
http://buttersideup.com/edacwiki/Main_Page

A prior discussion thread in 'fm' titled 'X38/975x ECC memory support' is here:
http://opensolaris.org/jive/thread.jspa?threadID=52440&tstart=60
Thread links:
http://www.madore.org/~david/linux/#ECC_for_82x
http://developmentonsolaris.wordpress.com/2008/03/12/intel-82975x-mch-and-logging-of-ecc-events-on-solaris/

Note that the 'ecccheck.pl' script depends on the 'pcitweak' utility, which is no longer present in OpenSolaris 2009.06 and Ubuntu 8.10 because of Xorg changes. One Linux user needing the utility copied it from another distro. The version of pcitweak included with previous versions of OpenSolaris might work on 2009.06.
http://opensolaris.org/jive/thread.jspa?threadID=105975&tstart=90
http://ubuntuforums.org/showthread.php?t=1054516

Finally, on unbuffered ECC memory prices and speeds: they are a bit behind regular unbuffered RAM in price and speed, but both are still reasonable. When comparing prices, remember that ECC RAM uses 9 chips where non-ECC uses 8, so expect at least a 12.5% price increase. Consider:

DDR2: $64 for Crucial 4GB kit (2GBx2), 240-pin DIMM, Unbuffered DDR2 PC2-6400 memory module
http://www.crucial.com/store/partspecs.aspx?IMODULE=CT2KIT25672AA800

DDR3: $108 for Crucial 6GB (3 x 2GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) Triple Channel Kit Server Memory Model CT3KIT25672BA1339 - Retail
http://www.newegg.com/Product/Product.aspx?Item=N82E16820148259

-hk
Cheers Miles, and thanks also for the tip to look in the BIOS options to see if ECC is actually used.

Which mode would you use? Max seems the most appealing; why would anyone use something called basic? But there must be a catch if they provided several ECC support modes.

I am glad this thread seems to be going somewhere, with lots of valuable contributions =:^)
-- 
This message posted from opensolaris.org
More choice is good! It seems Intel's server boards sometimes accept desktop CPUs, but don't support SpeedStep. Is all OK with those?
-- 
This message posted from opensolaris.org
> Note that the 'ecccheck.pl' script depends on the 'pcitweak' utility
> which is no longer present in OpenSolaris 2009.06 and Ubuntu 8.10
> because of Xorg changes.

This is exactly the kind of hidden trap I fear. One does everything right and then discovers that xx is missing or has been changed or depends on yy or doesn't work with zz. And that discovery comes after hours/days/weeks of trying to find out why something misbehaves. Thanks for the heads up!

2008.11 would be a safer bet then? Or Solaris CE?
-- 
This message posted from opensolaris.org
Miles Nordin
2009-Jul-24 19:29 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
>>>>> "c" == chris <no-reply at opensolaris.org> writes: >>>>> "hk" == Haudy Kazemi <kaze0010 at umn.edu> writes:c> why would anyone use something called basic? But there must be c> a catch if they provided several ECC support modes. They are just taiwanese. They have no clue wtf they are doing and do not care about quality since their customers don''t have any memory past six months, and anyway if they get a bad reputation they''ll just sell the same crap under a different brand name. They are just trying to ship kit for gamers as quickly as possible. They view the design of their box and website and little bubbly slogan blobs as their key distinguishing asset over their competitors, not what''s inside. Do not try to reason with these people who have zero respect for you, and certainly do not trust them to do something ``reasonable'''' and then try to work out what they''ve not documented based on blind faith in an orderly world with competent stewardship. Read jaakko''s script that I posted and set the timer to scrub the amount of memory you have about once a day. The script will give you status, showing the current address being scrubbed, so you can watch the rate at which the counter increases, and also watch the counter wrap around to determine the number of zeroes it has hacked off from the size of your memory. With a couple observations spaced 10min apart, plus a series of observations each spaced 4 hours apart, you can convert the microseconds you feed to the script into hours-per-complete-pass and accomplish this. If you really care that much. I didn''t---just made sure it wasn''t crazy. It isn''t really that important anyway---you should not think about it so much. I jsut blundered through it. I''m spending more time writign about it than doing it. it is just a bunch of toy knobs for you to play with. The important thing is, will it actually correct errors? will the OS count the errors? will it localize them to a DIMM? since the L2 caches are 10x the size they used to be and etched smaller/more-sensitive will ECC also work on the on-chip cache and get counted and reported distinct from DIMM''s, or not? All of these are more important than whether it does any scrubbing at all, much less the specific timing of the scrubbing. I bought four different boards on purpose to get a cross-section of crappyness, and the arena is incredibly stabby. There is all kinds of stuff in the BSoS like DIMM powerdown and C1E support that probably doesn''t work at all. One of these nvidia-northbridge boards, the audio controller showed up on a different interrupt every time I booted it. Some other board, you needed an actual physical floppy disk to update the BIOS---pxeboot/memdisk just froze, even though it works for everything else I''ve tried. Who even HAS a floppy disk? Don''t get distracted playing with their stupid fisher price knobs like an overclocker. Just flip the ECC thing on and forget about it. What jaakko demonstrates is that support for AMD ECC probably belongs in the OS or bootloader anyway, since it''s entirely in the CPU and not integrator-specific and the people who write OS''s are less idiotic than these ship-and-forget BIOS people and besides the error reporting needs to be in the OS anyway, which is a more complicated piece, so relying on someone else to ``turn it on'''' for you when turning it on boils down to a couple setpci lines in a shell script is completely retarded. 
hk> DDR3: $108 for Crucial 6GB (3 x 2GB) 240-Pin DDR3 SDRAM ECC yeah, but for 4GB parts it''s not a 12% difference any more. It''s raep. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090724/031e6356/attachment.bin>
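For the curious, the "couple setpci lines" poke the K8 northbridge, which shows up as PCI device 18 (hex), function 3 on node 0; setpci here is the Linux pciutils tool used by the linked script. The register offset below is quoted from memory of the K8 BKDG, so treat it as an assumption and verify it against the BKDG and the script itself before trusting the result, let alone writing anything:

    # read the DRAM scrub control register (F3x58 in BKDG terms) from node 0's northbridge
    # -- offset assumed, read-only, verify before acting on it
    setpci -s 00:18.3 58.L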
Constantin Gonzalez
2009-Jul-29 10:18 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Hi,

thank you so much for this post. This is exactly what I was looking for. I've been eyeing the M3A76-CM board, but will now look at the 78 and M4A as well.

Actually, not that many Asus M3A, let alone M4A, boards show up yet on the OpenSolaris HCL, so I'd like to encourage everyone to share their hardware experience by clicking on the "submit hardware" link on:

http://www.sun.com/bigadmin/hcl/data/os/

I've done it a couple of times and it's really just a matter of 5-10 minutes where you can help others know if a certain component works or not, or if a special driver or /etc/driver_aliases setting is required.

I'm also interested in getting the power down. Right now, I have the Athlon X2 5050e (45W TDP) on my list, but I'd also like to know more about the possibilities of the Athlon II X2 250 and whether it has better potential for power savings.

Neal, the M3A78 seems to have a RealTek RTL8111/8168B NIC chip. I pulled this off a Gentoo wiki, because strangely this information doesn't show up on the Asus website.

Also, thanks for the CF-to-PATA hint for the root pool mirror. Will try to find fast CFs to boot from. The performance problems you see when writing may be related to master/slave issues, but I'm not a good enough PC tweaker to back that up.

Cheers,
   Constantin

F. Wessels wrote:
> I'm using Asus M3A78 boards (with the SB700) for OpenSolaris and M2A* boards
> (with the SB600) for Linux, some of them with 4x1GB and others with 4x2GB ECC
> memory. ECC faults will be detected and reported.
> [...]

-- 
Constantin Gonzalez                        Sun Microsystems GmbH, Germany
Principal Field Technologist              http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91      http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering
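On the RTL8111/8168B: the workaround people usually describe (and what the blog linked earlier in the thread appears to do) is to add the chip's PCI ID as an alias for the rge driver. Whether rge actually drives that particular revision is board- and build-dependent, so this is a sketch only; the safe fallback remains an Intel e1000 card, as F. Wessels does:

    # check the NIC's vendor/device ID first
    prtconf -pv | grep -i pci10ec
    # bind the common Realtek 8168/8111 ID to the rge driver (ID shown is an assumption; use yours)
    update_drv -a -i '"pci10ec,8168"' rge
    # then reboot, or try re-attaching: devfsadm -i rge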
Ok, I am ready to try. Two last questions before I go for it:

- Which version of (Open)Solaris for ECC support (which seems to have been dropped from 2009.06) and for a general as-few-headaches-as-possible installation?
- Do you think this issue with the AMD Athlon II X2 250
  http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3572&p=2&cp=4
  would affect Cool'n'Quiet support in Solaris?

Thanks for your insight.
-- 
This message posted from opensolaris.org
Karel Gardas
2009-Sep-03 09:57 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
Hello, your "(open)solaris for Ecc support (which seems to have been dropped from 200906)" is misunderstanding. OS 2009.06 also supports ECC as 2005 did. Just install it and use my updated ecccheck.pl script to get informed about errors. Also you might verify that Solaris'' memory scrubber is really running if you are that curious: http://developmentonsolaris.wordpress.com/2009/03/06/how-to-make-sure-memory-scrubber-is-running/ Karel -- This message posted from opensolaris.org
On Thu, Sep 3, 2009 at 4:57 AM, Karel Gardas <karel.gardas at centrum.cz> wrote:
> your "(Open)Solaris for ECC support (which seems to have been dropped from
> 2009.06)" is a misunderstanding. OpenSolaris 2009.06 supports ECC just as
> earlier releases did. Just install it and use my updated ecccheck.pl script
> to get informed about errors. Also, you might verify that the Solaris memory
> scrubber is really running, if you are that curious:
> http://developmentonsolaris.wordpress.com/2009/03/06/how-to-make-sure-memory-scrubber-is-running/
> Karel

Is there something that needs to be done on the Solaris side for memscrub scans to occur? I'm running snv_118, with a Supermicro board running ECC memory and AMD Opteron CPUs. It would appear it's doing a lot of nothing.

Aug 8 03:56:23 fserv unix: [ID 950921 kern.info] cpu0: x86 (chipid 0x0 AuthenticAMD 40F13 family 15 model 65 step 3 clock 2010 MHz)
Aug 8 03:56:23 fserv unix: [ID 950921 kern.info] cpu0: Dual-Core AMD Opteron(tm) Processor 2212

root at fserv:~# isainfo -v
64-bit amd64 applications
        tscp ahf cx16 sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx
        cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
        tscp ahf cx16 sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx
        cmov amd_sysc cx8 tsc fpu

root at fserv:~# echo "memscrub_scans_done/U" | mdb -k
memscrub_scans_done:
memscrub_scans_done:            0
Brandon High
2009-Sep-06 01:42 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
On Fri, Sep 4, 2009 at 7:14 PM, Tim Cook <tim at cook.ms> wrote:
> Is there something that needs to be done on the Solaris side for memscrub
> scans to occur? I'm running snv_118, with a Supermicro board running ECC
> memory and AMD Opteron CPUs. It would appear it's doing a lot of nothing.

My AMD board from ASUS has a BIOS option to scrub memory, outside of the OS. Check that?

-B

-- 
Brandon High : bhigh at freaks.com
Karel Gardas
2009-Sep-07 07:01 UTC
[zfs-discuss] Motherboard for home zfs/solaris file server
What's your uptime? Usually it scrubs memory during idle time and usually waits quite long, nearly till the deadline -- which is IIRC 12 hours. So do you have more than 12 hours of uptime?
-- 
This message posted from opensolaris.org
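The scrubber's knobs live in the kernel as global variables, so they can be inspected the same way as the scan counter shown above. The variable names here are assumptions based on the i86pc memscrub code; confirm they exist on your build before relying on them:

    # is the software scrubber disabled outright?
    echo "disable_memscrub/X" | mdb -k
    # how long one full pass is allowed to take, in seconds (the ~12 hour figure mentioned above)
    echo "memscrub_period_sec/D" | mdb -k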
On Mon, Sep 7, 2009 at 2:01 AM, Karel Gardas <karel.gardas at centrum.cz> wrote:
> What's your uptime? Usually it scrubs memory during idle time and
> usually waits quite long, nearly till the deadline -- which is IIRC 12
> hours. So do you have more than 12 hours of uptime?

10:43am up 30 days 6:47, 1 user, load average: 0.00, 0.00, 0.00