I had recently started setting up a homegrown OpenSolaris NAS with a large RAIDZ2 pool, and had found its RAIDZ2 performance severely lacking - more like downright atrocious. As originally set up:

* Asus M4A785-M motherboard
* Phenom II X2 550 Black CPU
* JMB363-based PCIe X1 SATA card (2 ports)
* SII3132-based PCIe X1 SATA card (2 ports)
* Six on-board SATA ports

Two 500 GB drives (one Seagate, one WD) serve as the root pool, and have performed admirably. The other eight 500 GB drives (4 Seagate, 4 WD, in a RAIDZ2 configuration) performed quite poorly, with lots of long freezeups and no error messages. Even streaming a 48 kHz/24-bit FLAC via CIFS would occasionally freeze for 5-10 seconds, with no other load on the file server. Such freezeups became far more likely with other activity - forget about streaming video if a scrub was going on, for instance. These pauses were NOT accompanied by any CPU activity. If I watched the array with GKrellM, I could see the pauses. I started to get the feeling that I was running into a bad I/O bottleneck. I don't know how many PCIe lanes the onboard ports use, but I'm now of the opinion that two-port PCIe X1 SATA cards are a Very Bad Idea for OpenSolaris.

Today, I replaced the motley assortment of controllers with an Intel SASUC8I to handle the RAIDZ2 array, leaving the root pool on two of the onboard ports. Having already had a heart-attack moment last week after rearranging drives, *this* time I knew to do a "zpool export" before powering the system down. :O

The card worked out of the box, with no extra configuration required. WOW, what a difference! I tried a minor stress test: viewing some 720p HD video on one system via NFS, while streaming music via CIFS to my XP desktop. Not a single pause or stutter - smooth as silk. Just for kicks, I upped the ante and started a scrub on the RAIDZ2. No problem! Finally, it works like it should! The scrub is going about twice as fast overall, with none of the herky-jerky action I was getting with the mix-and-match SATA interfaces.

An interesting note about the SASUC8I: the name "Intel" doesn't occur anywhere on the card. It's basically a repackaged LSI SAS3081E-R (it's even labeled as such on the card itself and on the antistatic bag), and came as a bare card in a box with an additional low-profile bracket for those with 1U cases - no driver CD or cables. I knew that it didn't come with cables, and ordered them separately. If I had ordered the LSI kit with cables from the same supplier, it would have cost about $80 more than getting the SASUC8I and cables separately.

If you're building a NAS and have a PCIe X8 or X16 slot handy, this card is well worth it. Leave the two-port cheapies for workstations.
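For anyone who wants the controller-swap sequence spelled out, it boils down to an export before the swap and an import after (a minimal sketch - substitute your own pool name, and expect your device names to differ):

  # zpool export tank     [quiesces the pool and releases the disks]
    ... power down, move the drives to the new controller, boot ...
  # zpool import          [scans attached disks, lists importable pools]
  # zpool import tank     [brings the pool back under its new device names]

ZFS identifies disks by their on-disk labels rather than by controller or target numbers, which is why the pool comes back cleanly even though every cXtYdZ name has changed.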
Hi,

Thank you for sharing it. It seems cheaper than the HBA from LSI, doesn't it? Can you tell us the build version of the OpenSolaris?

best regards,
hanzhu

On Fri, Mar 12, 2010 at 8:52 AM, Russ Price <rjp_sun at fubegra.net> wrote:
> I had recently started setting up a homegrown OpenSolaris NAS with a large
> RAIDZ2 pool, and had found its RAIDZ2 performance severely lacking - more
> like downright atrocious.
<snip>
Glad you got it humming! I got my (2x) 8-port LSI cards from here for $130USD...

http://cgi.ebay.com/BRAND-NEW-SUPERMICRO-AOC-USASLP-L8I-UIO-SAS-RAID_W0QQitemZ280397639429QQcmdZViewItemQQptZLH_DefaultDomain_0?hash=item4149006f05

Works perfectly.
> Can you tell us the build
> version of the opensolaris?

I'm currently on b134 (but I had the performance issues with 2009.06, b130, b131, b132, and b133 as well).

I may end up swapping the Phenom II X2 550 for an Athlon II X4 630 that I've put into another M4A785-M system. I noticed that the eight-disk scrub came close to maxing out both cores of the Phenom - so it wouldn't hurt to have a couple more cores in place of the extra cache and clock speed. However, even with the CPU nearly pegged, it was still serving files smoothly - vastly better than the rag-tag controller assortment.

If you're going to run a big array on OpenSolaris, it's a good idea to use a real HBA instead of consumer-grade interfaces. :o) The nice thing about the SASUC8I / SAS3081E-R is that, by default, it presents the array to the operating system as individual drives - perfect for ZFS.

 scrub: scrub completed after 1h33m with 0 errors on Thu Mar 11 19:17:01 2010
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            c11t1d0p1  ONLINE       0     0     0
            c11t7d0p1  ONLINE       0     0     0
            c11t6d0p1  ONLINE       0     0     0
            c11t4d0p1  ONLINE       0     0     0
            c11t0d0p1  ONLINE       0     0     0
            c11t5d0p1  ONLINE       0     0     0
            c11t2d0p1  ONLINE       0     0     0
            c11t3d0p1  ONLINE       0     0     0

errors: No known data errors
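For anyone who wants to watch this from the command line instead of GKrellM, the scrub behavior is easy to observe (a rough sketch; pick whatever interval suits you):

  # zpool scrub tank
  # zpool status tank     [the "scrub:" line above came from this]
  # iostat -xn 5          [per-device throughput every 5 seconds]

On the old mix of controllers, the kind of freezeups I described would show up in iostat as the per-disk kr/s columns flatlining for seconds at a time; on the SASUC8I they stay busy continuously.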
Russ Price wrote:
> I'm currently on b134 (but I had the performance issues with 2009.06, b130,
> b131, b132, and b133 as well).
<snip>
> If you're going to run a big array on OpenSolaris, it's a good idea to use
> a real HBA instead of consumer-grade interfaces. :o) The nice thing about
> the SASUC8I / SAS3081E-R is that, by default, it presents the array to the
> operating system as individual drives - perfect for ZFS.

In general, I would heartily agree with Russ, in that the 8-port LSI-based PCI-E cards are very, very well worth the price. I'm a satisfied user of the Marvell-based PCI-X cards, too (at least, since the 2009.06 release).

That all said, I've had good experiences with the SiliconImage chips, though that experience has been limited to the PCI/PCI-X versions (3114/3124). They definitely are lower-end, though - I've never tried hot-swapping a drive attached to a SilIm controller.

---

Since you mentioned it, do you have access to an Athlon II and Phenom II at the same clock rate/core count? Would you be willing to test out a couple of things? I'm trying to determine whether the extra L3 cache on the Phenom makes any real difference for ZFS usage. As previously mentioned, More Cores > Higher Clock rate, at least for NAS usage.

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
On Friday 12,March,2010 12:02 PM, Erik Trimble wrote:
> That all said, I've had good experiences with the SiliconImage chips,
> though that experience has been limited to the PCI/PCI-X versions
> (3114/3124). They definitely are lower-end, though - I've never tried
> hot-swapping a drive attached to a SilIm controller.

As a user of the el-cheapo US$18 SIL3114, I managed to make the system freeze continuously when one of the SATA cables got disconnected. I am using 8 disks in RAIDZ2, driven by 2 x SIL3114.

The system was still able to answer pings, but SSH and the console were no longer responsive, and obviously neither were the NFS and CIFS shares. The console kept printing a "waiting for disk" loop. The only way to recover was to reset the system; as expected, one of the disks went offline, but the service came back online with a degraded ZFS pool. I was using EON 0.59.9, based on snv_129.

OTOH, I have also never had any "sudden death" hard disk problem. All of my disk failures showed up either as ZFS checksum failures or as increasing error counts in the SMART report. Based on that, it is enough to log an RMA call to Seagate using RMA code "Hardware RAID making SeaTools test not possible".
Hi,

thanks for sharing. Is your LSI card running in IT or IR mode? I had some issues getting all drives connected in IR mode, which is the factory default of the LSI-branded cards.

I am also curious why your controller shows up as "c11". Does anybody know more about the way this is enumerated? I am having two LSI controllers, one is "c10", the other "c11". Why can't controllers count from 1?

Regards,

Tonmaus
Hi,

I suspect mine are already IT mode... not sure how to confirm that though... I have had no issues.

My controller is showing as c8... odd, isn't it. It's in the x16 PCIe slot at the moment... I am not sure how it gets the number...
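Come to think of it, a couple of things that *might* reveal the firmware mode (untested guesswork on my part, not gospel):

  # grep -i mpt /var/adm/messages     [I believe the mpt driver logs the
                                       firmware revision when it attaches]
  # raidctl -l                        [lists RAID-capable controllers/volumes]

If raidctl sees the controller but shows no volumes and won't create one, that points at IT firmware; otherwise the LSI option ROM (Ctrl-C during POST) should state the firmware type outright.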
On Mar 11, 2010, at 10:02 PM, Tonmaus wrote:
> I am also curious why your controller shows up as "c11". Does anybody know
> more about the way this is enumerated? I am having two LSI controllers, one
> is "c10", the other "c11". Why can't controllers count from 1?

All of the other potential disk controllers line up ahead of it. For example, you will see controller numbers assigned for your CD, floppy, USB, SD, CF, etc.
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Atlanta, March 16-18, 2010 http://nexenta-atlanta.eventbrite.com
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
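P.S. You can see where a given cN comes from yourself (a sketch - the paths will differ per machine):

  # ls -l /dev/dsk/c11t0d0s0     [the symlink target under /devices shows the
                                  physical device path behind the cN number]
  # cat /etc/path_to_inst        [instance numbers already handed out]

Instance assignments are sticky: /etc/path_to_inst remembers hardware the system has seen before, so numbers that belonged to a since-removed controller are not reused, and a new card simply takes the next free number.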
Russ Price <rjp_sun <at> fubegra.net> writes:
> I had recently started setting up a homegrown OpenSolaris NAS with
> a large RAIDZ2 pool, and had found its RAIDZ2 performance severely
> lacking - more like downright atrocious. As originally set up:
>
> * Asus M4A785-M motherboard
> * Phenom II X2 550 Black CPU
> * JMB363-based PCIe X1 SATA card (2 ports)
> * SII3132-based PCIe X1 SATA card (2 ports)
> * Six on-board SATA ports

Did you enable AHCI mode on _every_ SATA controller?

I have the exact opposite experience with 2 of your 3 types of controllers. I have built various ZFS storage servers with 6-12 drives each, using onboard SB600/SB700 and SiI3132 controllers, and have always succeeded in getting outstanding I/O throughput by enabling AHCI mode. For example, one of my machines gets 400+MB/s sequential read throughput from a 7-drive raidz pool (2 drives on SiI3132, 1 on SiI3124, 4 on onboard SB700). I have never tested the JMB363 though, so maybe it was the culprit in your setup?

-mrb
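P.S. For reference, the quick-and-dirty sequential-read check behind numbers like the above (a sketch; make the file comfortably larger than RAM so the ARC can't serve the reads from cache):

  # /usr/bin/time dd if=/pool/bigfile of=/dev/null bs=1024k

Solaris dd doesn't print a throughput figure itself, so divide the bytes read by the elapsed "real" time that time(1) reports.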
Richard Elling wrote:
> All of the other potential disk controllers line up ahead of it. For
> example, you will see controller numbers assigned for your CD,
> floppy, USB, SD, CF etc.
> -- richard

Hi Richard,

thanks for the explanation. Actually, I started to worry about controller numbers when I installed LSI cards that were replacing an Areca 1170. The Areca had taken number 9, and the LSI cards started from 10. Could it be that the BIOS caches configuration data that leads to this? What is, btw, the proper method to configure white-box hardware to achieve more convenient readouts?

Regards,

Tonmaus
> Did you enable AHCI mode on _every_ SATA controller?
>
> I have the exact opposite experience with 2 of your 3
> types of controllers.

It wasn't possible to do so, and that also made me think that a real HBA would work better. First off, with the AMD SB700/SB800 on-board ports, if I set the last two ports to AHCI mode, the BIOS doesn't even see drives there, and neither does OpenSolaris; the first four ports work fine in AHCI. The JMicron board came up in AHCI mode; it never, ever presents a BIOS of its own to change configuration. The Silicon Image board (one from SIIG) doesn't have an AHCI mode in its BIOS.
Dedhi Sujatmiko wrote:
> As a user of the el-cheapo US$18 SIL3114, I managed to make the system
> freeze continuously when one of the SATA cables got disconnected. I am
> using 8 disks in RAIDZ2, driven by 2 x SIL3114.
<snip>

The SIL3112/3114 were very early SATA controllers - indeed, barely SATA controllers at all by today's standards, as I think they always pretend to be PATA to the host system.

--
Andrew
Russ Price <rjp_sun <at> fubegra.net> writes:
>> Did you enable AHCI mode on _every_ SATA controller?
>>
>> I have the exact opposite experience with 2 of your 3
>> types of controllers.
>
> It wasn't possible to do so, and that also made me think that a real HBA
> would work better. First off, with the AMD SB700/SB800 on-board ports, if
> I set the last two ports to AHCI mode, the BIOS doesn't even see drives
> there, and neither does OpenSolaris; the first four ports work fine in
> AHCI. The JMicron board came up in AHCI mode; it never, ever presents a
> BIOS of its own to change configuration. The Silicon Image board (one
> from SIIG) doesn't have an AHCI mode in its BIOS.

Ok, so the lack of AHCI on the onboard SBxxx ports is very likely what was causing your performance issues. Legacy IDE mode is significantly slower. Sounds like you hit bugs in your motherboard BIOS that prevented you from detecting drives while in AHCI mode...

(You are right that the SiI3132 doesn't support AHCI; however, it is a FIS-based controller with a hardware interface very similar in design to AHCI, so it does offer great performance out of the box.)

IMHO the best 2-port PCIe x1 controller is the Marvell 88SE9128, which is AHCI compliant. I like it not because it supports SATA 6.0Gbps, but PCIe 5GT/s. People often believe that a PCIe 2.5GT/s x1 device can do 250MB/s, but this is only achievable with a large Max_Payload_Size. In practice MPS is often 128 bytes, which limits them to about 60% of the max throughput, or 150MB/s. Given that 2 drives can easily sustain a read throughput of 200-250MB/s, PCIe 5GT/s comes in handy by allowing about 300MB/s with MPS=128 (500MB/s theoretical).

-mrb
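P.S. Rough numbers behind the 60%, for anyone curious (back-of-the-envelope only): after 8b/10b coding, a 2.5GT/s lane moves 250MB/s. With MPS=128, each 128-byte completion TLP drags along on the order of 20-24 bytes of header, framing and LCRC, so the per-packet ceiling is already about 128/152, i.e. ~84% of wire speed; DLLP/flow-control traffic and read-request turnaround eat the rest, which is how real-world throughput lands near 60%, or ~150MB/s. Apply the same ~60% to a 5GT/s lane's 500MB/s and you get the ~300MB/s I mentioned.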
How does it fare with regards to bug ID 6894775?

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775

//Svein
On Sun, Mar 14, 2010 at 4:26 AM, Svein Skogen <svein at stillbilde.net> wrote:
> How does it fare with regards to bug ID 6894775?
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775
>
> //Svein

It fares identically - it's literally the exact same card, OEM'd by Intel and sold for less money. Same drivers, same firmware; IIRC, it's even the same PCI device ID. When I ordered the card, I thought there was a mistake: as the previous poster already mentioned, it comes in a box with an LSI sticker, and the card says LSI all over it. The only place I saw Intel was the receipt.

--Tim
Thanks for your review! My SiI3114 isn't recognizing drives in OpenSolaris, so I've been looking for a replacement. This card seems perfect, so I ordered one last night. Can anyone recommend a cheap 3x5.25" -> 5x3.5" enclosure I could use with this card? The extra ports necessitate more drives, obviously :)
Geoff wrote:
> Thanks for your review! My SiI3114 isn't recognizing drives in OpenSolaris,
> so I've been looking for a replacement. This card seems perfect, so I
> ordered one last night. Can anyone recommend a cheap 3x5.25" -> 5x3.5"
> enclosure I could use with this card? The extra ports necessitate more
> drives, obviously :)

You may need to replace the "RAID" BIOS with the "IDE" BIOS for the Sil3114:

http://www.siliconimage.com/support/searchresults.aspx?pid=28&cat=15

Get the flash tool, plus the "IDE BIOS" download, and flash that to your card. It should then work well, and provide OpenSolaris with what it really wants - a JBOD controller, rather than a sorta-kinda-fake-RAID controller. That said, the LSI-based HBA really is the thing you want. It's nice. :-)

I've moved to 7200RPM 2.5" laptop drives over 3.5" drives, for a combination of reasons: lower power, better performance than comparably sized 3.5" drives, and generally lower capacities, meaning resilver times are smaller. They're a bit more $/GB, but not a lot. If you can stomach the extra cost (they run $220), I'd actually recommend getting an 8x2.5" in 2x5.25" enclosure from Supermicro. It works nicely, plus it gives you a nice little place to put your SSD. :-)

http://www.supermicro.com/products/accessories/mobilerack/CSE-M28E1.cfm

Other than that, I've had good luck with the Venus series for 3.5" hot-swap drives:

http://www.centralcomputers.com/commerce/catalog/product.jsp?product_id=59195
http://www.newegg.com/Product/Product.aspx?Item=N82E16817332011

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
I don't think you can fit five 3.5" drives in 3 x 5.25", but I have a number of coolermaster 4-in-3 modules, and I recommend them:

http://www.amazon.com/-/dp/B00129CDGC/

On Sat, Mar 20, 2010 at 20:23, Geoff <geoffakerlund at gmail.com> wrote:
> Thanks for your review! My SiI3114 isn't recognizing drives in OpenSolaris,
> so I've been looking for a replacement. This card seems perfect, so I
> ordered one last night. Can anyone recommend a cheap 3x5.25" -> 5x3.5"
> enclosure I could use with this card? The extra ports necessitate more
> drives, obviously :)
Whoops, Erik's links show I was wrong about my first point. Though those 5-in-3s are five times as expensive as the 4-in-3.

On Sat, Mar 20, 2010 at 22:46, Ethan <notethan at gmail.com> wrote:
> I don't think you can fit five 3.5" drives in 3 x 5.25", but I have a
> number of coolermaster 4-in-3 modules, and I recommend them:
> http://www.amazon.com/-/dp/B00129CDGC/
<snip>
Nah, the 8x2.5"-in-2 are $220, while the 5x3.5"-in-3 are $120. You can get 4x3.5"-in-3 for $100, 3x3.5"-in-2 for $80, and even 4x2.5"-in-1 for $65. ( http://www.addonics.com/products/raid_system/ae4rcs25nsa.asp ) The Cool Master thing you linked to isn''t a Hot Swap module. It does 4-in-3, but there''s no backplane. You can''t hot-swap drives put into that sucker. -Erik Ethan wrote:> Whoops, Erik''s links show I was wrong about my first point. Though > those 5-in-3s are five times as expensive as the 4-in-3. > > On Sat, Mar 20, 2010 at 22:46, Ethan <notethan at gmail.com > <mailto:notethan at gmail.com>> wrote: > > I don''t think you can fit five 3.5" drives in 3 x 5.25", but I > have a number of coolermaster 4-in-3 modules, I recommend them: > http://www.amazon.com/-/dp/B00129CDGC/ > > > On Sat, Mar 20, 2010 at 20:23, Geoff <geoffakerlund at gmail.com > <mailto:geoffakerlund at gmail.com>> wrote: > > Thanks for your review! My SiI3114 isn''t recognizing drives > in Opensolaris so I''ve been looking for a replacement. This > card seems perfect so I ordered one last night. Can anyone > recommend a cheap 3 x 5.25 ---> 5 3.5 enclosure I could use > with this card? The extra ports necessitate more drives, > obviously :) > -- > This message posted from opensolaris.org <http://opensolaris.org> > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org <mailto:zfs-discuss at opensolaris.org> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > > > ------------------------------------------------------------------------ > > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss >-- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA
On Sat, Mar 20, 2010 at 09:50:10PM -0700, Erik Trimble wrote:
> Nah, the 8x2.5"-in-2 are $220, while the 5x3.5"-in-3 are $120.

And they have a SAS expander inside, unlike every other variant of these I've seen so far. Cabling mess win.

--
Dan.
> I've moved to 7200RPM 2.5" laptop drives over 3.5" drives, for a
> combination of reasons: lower power, better performance than
> comparably sized 3.5" drives, and generally lower capacities, meaning
> resilver times are smaller. They're a bit more $/GB, but not a lot.
> If you can stomach the extra cost (they run $220), I'd actually
> recommend getting an 8x2.5" in 2x5.25" enclosure from Supermicro. It
> works nicely, plus it gives you a nice little place to put your SSD. :-)

Regarding the 2.5" laptop drives, do the inherent error-detection properties of ZFS subdue any concerns over a laptop drive's higher bit error rate or rated MTBF? I've been reading about OpenSolaris and ZFS for several months now and am incredibly intrigued, but have yet to implement the solution in my lab.

Thanks!
Cooper Hubbell wrote:
> Regarding the 2.5" laptop drives, do the inherent error-detection
> properties of ZFS subdue any concerns over a laptop drive's higher bit
> error rate or rated MTBF? I've been reading about OpenSolaris and ZFS
> for several months now and am incredibly intrigued, but have yet to
> implement the solution in my lab.
> Thanks!

So far as I know, laptop drives have no higher error rates (i.e. unrecoverable errors per 1 billion bits read/written), and similar MTBF to standard consumer SATA drives. Looking at a couple of spec sheets, MTBF is about 600,000 hrs for laptop drives, and 700,000 hrs for consumer 3.5" drives.

Frankly, if I were concerned about individual component failures, I'd look outside the consumer space (in all form factors). In both cases, they're not terribly reliable, which is why ZFS is so great. :-) And, yes, to answer your question, this is (one of) the whole points behind ZFS - being able to provide a reliable service from unreliable parts.

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
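P.S. To put those MTBF figures in perspective (back-of-the-envelope only): 600,000 hours works out to an annualized failure rate of roughly 8760/600000, i.e. about 1.5% per drive per year. Across an 8-drive pool, that's one expected drive failure every 8 years or so - and that says nothing about the silent errors that never kill a drive outright, which is exactly what ZFS checksumming and regular scrubs are for.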
On 22.03.2010 16:24, Cooper Hubbell wrote:
> Regarding the 2.5" laptop drives, do the inherent error-detection
> properties of ZFS subdue any concerns over a laptop drive's higher bit
> error rate or rated MTBF? I've been reading about OpenSolaris and ZFS
> for several months now and am incredibly intrigued, but have yet to
> implement the solution in my lab.

Well ... the price difference means you can have mirrors of the laptop drives and still save money compared to the "enterprise" ones. With a modern patrol-reading (scrub or hardware RAID) array setup, and with some redundancy, you can re-implement "I" to mean "inexpensive", not "independent", in RAID. ;)

//Svein

--
Sending mail from a temporarily set-up workstation, as my primary W500 is off for service. PGP not installed.
Heh. The original definition of "I" was "inexpensive"; it was never meant to be "independent" - I guess that got changed by vendors. The idea all along was to take inexpensive hardware and use software to turn it into a reliable system.

http://portal.acm.org/citation.cfm?id=50214
http://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf

<snip>
>> Regarding the 2.5" laptop drives, do the inherent error-detection properties
>> of ZFS subdue any concerns over a laptop drive's higher bit error rate or
>> rated MTBF? I've been reading about OpenSolaris and ZFS for several months
>> now and am incredibly intrigued, but have yet to implement the solution in
>> my lab.
>
> Well ... the price difference means you can have mirrors of the laptop
> drives and still save money compared to the "enterprise" ones. With a modern
> patrol-reading (scrub or hardware RAID) array setup, and with some
> redundancy, you can re-implement "I" to mean "inexpensive", not
> "independent", in RAID. ;)
>
> //Svein

--
"You can choose your friends, you can choose the deals." - Equity Private
"If Linux is faster, it's a Solaris bug." - Phil Harman
Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
> * SII3132-based PCIe X1 SATA card (2 ports)

This chip is slow.

PCIe cards based on the Silicon Image 3124 are much faster, peaking around 1 GB/sec aggregate throughput. However, the 3124 is a PCI-X chip, and hence is used behind an Intel PCI serial-to-parallel bridge for PCIe applications: this makes for a more expensive card than a 3132.

All PCIe 3124 cards I have seen present all four 3124 ports as external eSATA ports. Perhaps someone else has seen a PCIe 3124 with internal SATA connectors?
On Sun, Mar 28 at 16:55, James Van Artsdalen wrote:
>> * SII3132-based PCIe X1 SATA card (2 ports)
>
> This chip is slow.
>
> PCIe cards based on the Silicon Image 3124 are much faster, peaking
> around 1 GB/sec aggregate throughput. However, the 3124 is a PCI-X
> chip, and hence is used behind an Intel PCI serial-to-parallel bridge
> for PCIe applications: this makes for a more expensive card than a
> 3132.
>
> All PCIe 3124 cards I have seen present all four 3124 ports as
> external eSATA ports. Perhaps someone else has seen a PCIe 3124
> with internal SATA connectors?

The 3124 was one of the first NCQ-capable chips on the market, and there are definitely internal versions of it around somewhere. While they're typically mounted on PCI-X boards, the original reference designs worked just fine in PCI slots.

As to the 3132, it's probably limited by the single bit lane. I think there's a 3134 variant that is PCIe x4, which should be a lot faster. That doesn't matter for rotating drives, but for SSDs it's important.

--eric

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org