Hi, I'm trying to find out which controller card people here recommend that can drive 8 SATA hard drives and that would work with my Asus M2N-SLI Deluxe motherboard, which has the following expansion slots: 2 x PCI Express x16 slots at x16, x8 speed (PCIe).

The main requirements I have are:

- drive 8 SATA drives
- rock-solid reliability with x86 OpenSolaris 2009.06 or SXCE
- easy to identify failed drives and replace them (hot swap is not necessary, but a bonus if supported)
- I must be able to move disks with data from one controller to another of a different brand (and back!), doing only a zpool export and import, which implies the HBA must be able to run in JBOD mode without storing or modifying anything on the disks. And preferably, the drives must show up with the format command.
- should preferably support staggered spinup of drives

From limited research I can see that at least the following 3 main possibilities exist:

1. Supermicro AOC-SAT2-MV8 (PCI-X interface) (pure SATA) (~$100)
2. Supermicro AOC-USAS-L8i / AOC-USASLP-L8i (PCIe interface) (miniSAS to SATA cables) (~$100)
3. LSI SAS 3081E-R or other LSI 'MegaRAID' cards (PCIe interface) (miniSAS to SATA cables) (~$200+)

1. AOC-SAT2-MV8:
Again, from reading a bit, I can see that although the M2N-SLI Deluxe motherboard does not have a PCI-X slot, apparently it could take the AOC-SAT2-MV8 card in one of the PCIe slots, although the card would only run in 32-bit mode instead of 64-bit mode, and would therefore run slower.

2. AOC-USAS-L8i:
The AOC-USAS-L8i card looks possible too, again running in a PCIe slot, but the old threads I saw on this seem to talk about some device numbering issue which could make determining the right failed drive to pull out a difficult task -- see here for more details:
http://www.opensolaris.org/jive/thread.jspa?messageID=271751
http://www.opensolaris.org/jive/thread.jspa?threadID=46982&tstart=90

3. LSI SAS 3081E-R or other LSI 'MegaRAID' cards:
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html?remote=1&locale
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118100

This forum thread from Dec 2007 doesn't sound too good regarding drive numbering (for identifying failed drives etc.), but the thread is 18 months old, and perhaps the issues may have been resolved by now? Also, I noticed an extra '-R' in the model number I found, but this might be an omission by the original forum poster -- see here:
http://www.opensolaris.org/jive/thread.jspa?threadID=46982&tstart=90

I saw Ben Rockwood saying good things about the LSI MegaRAID cards, although the model he references supports only 4 internal and 4 external drives, so it is not what I want -- see here:
http://opensolaris.org/jive/message.jspa?messageID=368445#368445

Perhaps there are better LSI MegaRAID cards that people know of and can recommend? Preferably not too expensive though, as it's for a home system :)

If anyone can throw some light on these topics, I would be pleased to hear from you. Thanks a lot.

Simon
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
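For reference, the export/import test in that portability requirement boils down to just a few ZFS commands. A minimal sketch, assuming a hypothetical pool named "tank" (the pool name is illustrative, not from the thread):

    # zpool export tank      (quiesce and release the pool)
    ... shut down, move the disks to the other controller, boot ...
    # zpool import           (scan attached disks for importable pools)
    # zpool import tank      (import the pool found on the moved disks)
    # zpool status tank      (confirm every device shows ONLINE)

If the HBA writes its own RAID metadata to the disks, the import scan will not see the pool -- which is exactly why a true JBOD/pass-through mode matters for this requirement.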
dick hoogendijk
2009-Jun-21 15:12 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Sun, 21 Jun 2009 06:35:50 PDT
Simon Breden <no-reply at opensolaris.org> wrote:

> If anyone can throw some light on these topics, I would be pleased to
> hear from you. Thanks a lot.

I follow this thread with much interest. Curious to see what'll come out of it.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)
After checking some more sources, it seems that if I used the AOC-SAT2-MV8 with this motherboard, I would need to run it on the standard PCI slot. Here is the full listing of the motherboard's expansion slots:

2 x PCI Express x16 slot at x16, x8 speed
2 x PCI Express x1
3 x PCI 2.2  <<----------- this one
-- 
This message posted from opensolaris.org
I use the AOC-SAT2-MV8 in an ordinary PCI slot. The PCI slot maxes out at 150MB/sec or so -- that is the fastest you will get. That card works very well with Solaris/OpenSolaris: it is detected automatically, etc. I've heard, though, that it does not work with hot-swapping discs -- avoid doing this.

However, in a PCI-X slot it will max out at 1GB/sec. I have been thinking about buying a server mobo (they have PCI-X) to get 1GB/sec. Or should I buy a PCIe card instead? I don't know.
-- 
This message posted from opensolaris.org
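For reference, the rough arithmetic behind those two figures (theoretical bus maxima; real-world throughput is lower, and the bus is shared with other devices):

    plain PCI:  32 bits x 33 MHz  = ~1,066 Mbit/s = ~133 MB/sec theoretical
                (hence the ~100-150 MB/sec seen in practice)
    PCI-X:      64 bits x 133 MHz = ~8,533 Mbit/s = ~1,066 MB/sec theoretical
                (roughly the 1 GB/sec quoted above)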
just a side-question:

> I fol****this thread with much interest.

what are these "*****" for?

why is "followed" turned into "fol*****" on this board?
-- 
This message posted from opensolaris.org
dick hoogendijk
2009-Jun-21 21:10 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Sun, 21 Jun 2009 14:07:49 PDT
roland <no-reply at opensolaris.org> wrote:

> just a side-question:
>
> > I fol****this thread with much interest.
>
> what are these "*****" for ?
>
> why is "followed" turned into "fol*****" on this board?

The text of my original message was:

On Sun, 21 Jun 2009 06:35:50 PDT
Simon Breden <no-reply at opensolaris.org> wrote:

> If anyone can throw some light on these topics, I would be pleased to
> hear from you. Thanks a lot.

I follow this thread with much interest. Curious to see what'll come out of it.

Does the change occur again?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)
Hey Kebabber, long time no hear! :)

It's great to hear that you've had good experiences with the card. It's a great pity to have throughput drop from a potential 1GB/s to 150MB/s, but as most of my use of the NAS is across the network, and not local intra-NAS transfers, this should not be a problem. Of course, with a single GbE connection, speeds are normally limited to around 50MB/s or so anyway...

Tell me, have you had any drives fail and had to figure out how to identify the failed drive, replace it and resilver using the AOC-SAT2-MV8, or have you tried any experiments to test resilvering? I'm just curious as to how easy it is to do this with this controller card.

Like yourself, I was toying with the idea of upgrading and buying a shiny new mobo with dual 64-bit PCI-X slots and socket LGA1366 for Xeon 5500 series (Nehalem) processors -- the SuperMicro X8SAX here:
http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm

Then I added up the price of all the components and decided to try to make do with the existing kit and just do an upgrade. So I narrowed down possible SATA controllers to the above choices, and I'm interested in people's experiences of using these controllers to help me decide.

Seems like the cheapest and simplest choice will be the AOC-SAT2-MV8, and I just take a hit on the reduced speed -- but that won't be a big problem. However, as I have 2 x PCIe x16 slots available, if the AOC-USAS-L8i is reliable, doesn't have issues now with identifying drive IDs, and supports JBOD mode, then it looks like the better choice, as it uses the more modern PCI Express (PCIe) interface rather than the ageing PCI-X interface, fine as I'm sure it is.

Simon
-- 
This message posted from opensolaris.org
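For reference, the replace-and-resilver procedure being asked about is short at the ZFS level. A minimal sketch, with a hypothetical pool name ("tank") and device name (c2t3d0):

    # zpool status tank            (failed disk shows FAULTED or UNAVAIL)
    ... physically swap the failed disk for a new one in the same slot ...
    # zpool replace tank c2t3d0    (start resilvering onto the new disk)
    # zpool status tank            (watch the resilver progress)

The hard part -- and the point of the question -- is mapping the cXtYdZ name that ZFS reports to a physical drive bay, which is controller-dependent.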
On Mon 22/06/09 02:07 , roland no-reply at opensolaris.org sent:

> just a side-question:
>
> > I fol****this thread with much interest.
>
> what are these "*****" for ?
>
> why is "followed" turned into "fol*****" on this board?

It isn't a board, it's a mail list. All the forum does is bugger up the formatting and threading of emails!

-- 
Ian
Carson Gaspar
2009-Jun-22 02:01 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It works just fine. You need to get "lsiutil" from the LSI web site to fully access all the functionality, and they cleverly hide the download link only under their FC HBAs on their support site, even though it works for everything.

As for identifying disks, you can just use lsiutil:

root:gandalf 0 # lsiutil -p 1 42

LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009

1 MPT Port found

     Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
 1.  mpt0              LSI Logic SAS1068E B3     105      011a0000     0

mpt0 is /dev/cfg/c6

 B___T___L  Type       Operating System Device Name
 0   0   0  Disk       /dev/rdsk/c6t0d0s2
 0   1   0  Disk       /dev/rdsk/c6t1d0s2
 0   2   0  Disk       /dev/rdsk/c6t2d0s2
 0   3   0  Disk       /dev/rdsk/c6t3d0s2
 0   4   0  Disk       /dev/rdsk/c6t4d0s2
 0   5   0  Disk       /dev/rdsk/c6t5d0s2
 0   6   0  Disk       /dev/rdsk/c6t6d0s2
 0   7   0  Disk       /dev/rdsk/c6t7d0s2
Andre van Eyssen
2009-Jun-22 02:05 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Sun, 21 Jun 2009, Carson Gaspar wrote:

> I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It works
> just fine. You need to get "lsiutil" from the LSI web site to fully access
> all the functionality, and they cleverly hide the download link only under
> their FC HBAs on their support site, even though it works for everything.

I'll add another vote for the LSI products. I have a four port PCI-X card in my V880, and the performance is good and the product is well behaved. The only caveats:

1. Make sure you upgrade the firmware ASAP
2. You may need to use lsiutil to fiddle the target mappings

Andre.

-- 
Andre van Eyssen.
mail: andre at purplecow.org           jabber: andre at interact.purplecow.org
purplecow.org: UNIX for the masses  http://www2.purplecow.org
purplecow.org: PCOWpix              http://pix.purplecow.org
James C. McPherson
2009-Jun-22 02:14 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Sun, 21 Jun 2009 19:01:31 -0700
Carson Gaspar <carson at taltos.org> wrote:

> I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It
> works just fine. You need to get "lsiutil" from the LSI web site to
> fully access all the functionality, and they cleverly hide the download
> link only under their FC HBAs on their support site, even though it
> works for everything.

As a member of the team which works on mpt(7d), I'm disappointed that you believe you need to use lsiutil to "fully access all the functionality" of the board.

What gaps have you found in mpt(7d) and the standard OpenSolaris tools that lsiutil fixes for you?

What is the "full functionality" that you believe is missing?

> As for identifying disks, you can just use lsiutil:

... or use cfgadm(1m), which has had this ability for many years.

> root:gandalf 0 # lsiutil -p 1 42
>
> LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009
>
> 1 MPT Port found
>
>      Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
>  1.  mpt0              LSI Logic SAS1068E B3     105      011a0000     0
>
> mpt0 is /dev/cfg/c6
>
>  B___T___L  Type       Operating System Device Name
>  0   0   0  Disk       /dev/rdsk/c6t0d0s2
>  0   1   0  Disk       /dev/rdsk/c6t1d0s2
>  0   2   0  Disk       /dev/rdsk/c6t2d0s2
>  0   3   0  Disk       /dev/rdsk/c6t3d0s2
>  0   4   0  Disk       /dev/rdsk/c6t4d0s2
>  0   5   0  Disk       /dev/rdsk/c6t5d0s2
>  0   6   0  Disk       /dev/rdsk/c6t6d0s2
>  0   7   0  Disk       /dev/rdsk/c6t7d0s2

You can get that information from use of cfgadm(1m).

James C. McPherson
-- 
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
Jorgen Lundman
2009-Jun-22 02:15 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
I only have a 32bit PCI bus in the Intel Atom 330 board, so I have no choice but to be "slower", but I can confirm that the Supermicro dac-sata-mv8 (SATA-1) card works just fine, and does display in cfgadm. (Hot-swapping is possible.)

I have been told the aoc-sat2-mv8 (SATA-II) does as well, but I have not personally tried it.

Lund

-- 
Jorgen Lundman       | <lundman at lundman.net>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)
Jorgen Lundman wrote:
>
> I only have a 32bit PCI bus in the Intel Atom 330 board, so I have no
> choice but to be "slower", but I can confirm that the Supermicro
> dac-sata-mv8 (SATA-1) card works just fine, and does display in
> cfgadm. (Hot-swapping is possible.)
>
> I have been told the aoc-sat2-mv8 (SATA-II) does as well, but I have
> not personally tried it.
>
> Lund
>

I have an AOC-SAT2-MV8 in an older Opteron-based system. It's a 2-socket Opteron 252 system with 8GB of RAM and PCI-X slots. It's one of the newer AOC cards with the latest Marvell chipset, and it works like a champ -- very well, smooth, and I don't see any problems. Simple, out-of-the-box installation, and it works with no tinkering at all (with OS 2008.11 and 2009.05).

That said, there's a couple of things you want to be aware of about the AOC:

(1) It uses normal SATA cables. This is really nice in terms of availability (you can get any length you want easily at any computer store), but it's a bit messy compared to the nice multi-lane ones.

(2) It's a PCI-X card, and will run at 133MHz. I have a second gigabit ethernet card in my motherboard, which limits the two PCI-X cards to 100MHz. The down side is that with 8 drives and 2 gigabit ethernet interfaces, it's not hard to flood the PCI-X bus (which can pump 100MHz x 64bit = 6400 Mbps max, but about 50% of that under real usage).

(3) As a PCI-X card, it's a "two-thirds" length, low-profile card. It will fit in any PCI-X slot you have. However, if you are trying to put it in a 32-bit PCI slot, be aware that it extends about 2 inches (50mm) beyond the back of the PCI slot. Make sure you have the proper clearances for such a card. Also, it's a 3.3v card (won't work in 5v slots). None of this should be a problem in any modern motherboard/case setup, only in really old stuff.

(4) All the SATA connectors are on the end of the card, which means you'll need _at least_ another 1 inch (25mm) of clearance to plug the cables in.

As much as I like the card, these days I'd choose the LSI PCI-E model. The PCI-E bus is just superior to PCI-X -- you get much less bus contention, which means it's easier to get full throughput from each card.

One more thing: I've found that the newest MLC-based SSDs with the newer "barefoot" SATA controller and 64MB or more of cache are more than suitable for use as read cache, and they actually do OK as write cache, too. Particularly for a small business server machine (those that have 8-12 data drives, total). And, these days, there are nice little funky dual-2.5" drives in floppy form-factor things.

http://www.addonics.com/products/mobile_rack/doubledrive.asp

Example new SSD for Readzilla/Logzilla:
http://www.ocztechnology.com/products/flash_drives/ocz_summit_series_sata_ii_2_5-ssd

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
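For context on the Readzilla/Logzilla remark: attaching an SSD to a pool as a ZFS read cache (L2ARC) or a dedicated intent-log device is one command per device. A minimal sketch, with hypothetical pool and device names ("tank", c3t0d0, c3t1d0):

    # zpool add tank cache c3t0d0   (SSD becomes an L2ARC read cache)
    # zpool add tank log c3t1d0     (SSD becomes a dedicated ZIL/log device)
    # zpool status tank             (cache and log devices appear in their own sections)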
Eric D. Mudama
2009-Jun-22 04:51 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Mon, Jun 22 at 12:05, Andre van Eyssen wrote:

> I'll add another vote for the LSI products. I have a four port PCI-X card
> in my V880, and the performance is good and the product is well behaved.
> The only caveats:
>
> 1. Make sure you upgrade the firmware ASAP
> 2. You may need to use lsiutil to fiddle the target mappings

We bought a Dell T610 as a fileserver, and it comes with an LSI 1068E based board (PERC6/i SAS). Worked out of the box, no special drivers or anything to install; everything autodetected just fine.

Hotplug works great too. I've yanked drives (it came with WD RE3 SATA devices) while the box was under load without issues; it took ~5 seconds to time out the device and give me full interactivity at the console. They then show right back up when hot-plugged back in, and I can resilver without problems.

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
Jorgen Lundman
2009-Jun-22 04:52 UTC
[zfs-discuss] PicoLCD Was: Best controller card for 8 SATA drives ?
I hesitate to post this question here, since the relation to ZFS is tenuous at best (zfs to sata controller to LCD panel). But maybe someone has already been down this path before me.

Looking at building a RAID, with osol and zfs, I naturally want a front panel. I was looking at something like:

http://www.mini-box.com/picoLCD-256x64-Sideshow-CDROM-Bay

since it appears to come with open-source drivers, based on lcd4linux, which I can compile with marginal massaging. Has anyone run this successfully with osol? It appears to handle mrtg directly, so I should be able to graph a whole load of ZFS data. Has someone already been down this road too?

-- 
Jorgen Lundman       | <lundman at lundman.net>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)
Carson Gaspar
2009-Jun-22 08:25 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
James C. McPherson wrote:
> On Sun, 21 Jun 2009 19:01:31 -0700
>
> As a member of the team which works on mpt(7d), I'm disappointed that
> you believe you need to use lsiutil to "fully access all the functionality"
> of the board.
>
> What gaps have you found in mpt(7d) and the standard OpenSolaris
> tools that lsiutil fixes for you?
>
> What is the "full functionality" that you believe is missing?

How does one upgrade firmware without using lsiutil? Or toggle controller LEDs to identify which board is which, or... Feel free to read the lsiutil docs (bad though they are) -- the PDF is available from the LSI web site.

Although both lsiutil and hd produce errors from mpt when trying to get SMART data (specifically "Command failed with IOCStatus = 0045 (Data Underrun)"). I haven't tried using the LSI provided drivers.

>> As for identifying disks, you can just use lsiutil:
>
> ... or use cfgadm(1m) which has had this ability for many years.

Great. Please provide a sample command line. Because the man page is completely useless (no, really -- try reading it). And no invocation _I_ can find provides the same information. I can only assume it's one of the "hardware specific" options, which are documented nowhere that I can find.

Note that my comments all relate to Solaris 10 u7 -- it's certainly possible that things are better in OpenSolaris.

-- 
Carson
James C. McPherson
2009-Jun-22 11:26 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Mon, 22 Jun 2009 01:25:54 -0700
Carson Gaspar <carson at taltos.org> wrote:

> James C. McPherson wrote:
> > As a member of the team which works on mpt(7d), I'm disappointed that
> > you believe you need to use lsiutil to "fully access all the functionality"
> > of the board.
> >
> > What gaps have you found in mpt(7d) and the standard OpenSolaris
> > tools that lsiutil fixes for you?
> >
> > What is the "full functionality" that you believe is missing?
>
> How does one upgrade firmware without using lsiutil?

Use raidctl(1m). For fwflash(1m), this is on the "future project" list purely because we've got much higher priority projects on the boil -- if we couldn't use raidctl(1m) this would be higher up the list.

> Or toggle controller LEDs to identify which board is which, or...
> Feel free to read the lsiutil docs (bad though they are) - the PDF is
> available from the LSI web site.

LED stuff ... yeah, not such an easy thing to solve. I believe there has been a fair amount of effort gone into the generic FMA topology "parser" so that we can do this, but I do not know the status of the project.

> Although both lsiutil and hd produce errors from mpt when trying to get
> SMART data (specifically "Command failed with IOCStatus = 0045 (Data
> Underrun)"). I haven't tried using the LSI provided drivers.

Is "hd" a utility from the x4500 software suite?

> >> As for identifying disks, you can just use lsiutil:
> >
> > ... or use cfgadm(1m) which has had this ability for many years.
>
> Great. Please provide a sample command line. Because the man page is
> completely useless (no, really - try reading it). And no invocation _I_
> can find provides the same information. I can only assume it's one of
> the "hardware specific" options, which are documented nowhere that I can
> find.

Did you try "cfgadm -lav"? I was under the impression that the cfgadm(1m) manpage's examples section was sufficient to provide at least a starting point for a usable command line.

If you don't believe that is the case, I'd appreciate you filing a bug against it (yes, we do like to get doc/manpage bugs) so that we can make the manpage better.

> Note that my comments all relate to Solaris 10 u7 - it's certainly
> possible that things are better in OpenSolaris.

$ cfgadm -alv c0 c3
Ap_Id                Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c0                   connected    configured   unknown
unavailable  scsi-bus     n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi
c0::dsk/c0t4d0       connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c0t4d0
c0::dsk/c0t5d0       connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c0t5d0
c0::dsk/c0t6d0       connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c0t6d0
c0::dsk/c0t7d0       connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c0t7d0
c3                   connected    configured   unknown
unavailable  scsi-bus     n        /devices/pci at 79,0/pci10de,376 at a/pci1000,3150 at 0:scsi
c3::dsk/c3t3d0       connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 79,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t3d0
c3::dsk/c3t5d0       connected    configured   unknown    SAMSUNG HD321KJ
unavailable  disk         n        /devices/pci at 79,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t5d0
c3::dsk/c3t6d0       connected    configured   unknown    WDC WD3200AAKS-00VYA0
unavailable  disk         n        /devices/pci at 79,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t6d0
c3::dsk/c3t7d0       connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 79,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t7d0

That functionality has been in Solaris 10 since FCS. The manpage for cfgadm(1m) indicates that it was last changed in October 2004, which is a good 6 months prior to FCS of Solaris 10.

If you don't like it, and don't tell us, how are we supposed to know that it needs improving?

James C. McPherson
-- 
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
>>>>> "edm" == Eric D Mudama <edmudama at bounceswoosh.org> writes:

   edm> We bought a Dell T610 as a fileserver, and it comes with an
   edm> LSI 1068E based board (PERC6/i SAS).

which driver attaches to it?

pciids.sourceforge.net says this is a 1078 board, not a 1068 board.

please, be careful.  There's too much confusion about these cards.
Thanks guys, keep your experiences coming.
-- 
This message posted from opensolaris.org
Also, is anybody using the AOC-USAS-L8i? If so, what's your experience of it, and of identifying drives and replacing failed drives with it?
-- 
This message posted from opensolaris.org
Carson Gaspar
2009-Jun-22 22:28 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
James C. McPherson wrote:

> Use raidctl(1m). For fwflash(1m), this is on the "future project"
> list purely because we've got much higher priority projects on the
> boil - if we couldn't use raidctl(1m) this would be higher up the
> list.

Nice to see that raidctl can do that. Although I don't see a way to flash the BIOS or fcode with raidctl... am I missing something, is it a doc bug, or is it not possible? The man page intro mentions BIOS and fcode, but the only option I can see is '-F' and it just says firmware...

>> Although both lsiutil and hd produce errors from mpt when trying to get
>> SMART data (specifically "Command failed with IOCStatus = 0045 (Data
>> Underrun)"). I haven't tried using the LSI provided drivers.
>
> Is "hd" a utility from the x4500 software suite?

Yes. It's the only Sun provided tool I know of that will dump detailed SMART info (even on non-X45x0 hosts).

> Did you try "cfgadm -lav" ? I was under the impression that the
> cfgadm(1m) manpage's examples section was sufficient to provide
> at least a starting point for a usable command line.
>
> If you don't believe that is the case, I'd appreciate you filing
> a bug against it (yes, we do like to get doc/manpage bugs) so that
> we can make the manpage better.
...
> $ cfgadm -alv c0 c3
> Ap_Id                Receptacle   Occupant     Condition  Information
> When         Type         Busy     Phys_Id
> c0                   connected    configured   unknown
> unavailable  scsi-bus     n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi
> c0::dsk/c0t4d0       connected    configured   unknown    ST3320620AS ST3320620AS
> unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c0t4d0

That gives the same data as 'ls -l /dev/dsk/c0t4d0'. It does _not_ give the LSI HBA port number. And given the plethora of device mapping options in the LSI controller, you really want the real port numbers.

As for the man page, for a basic "give me a list of devices" the man page is overly complex and verbose, but sufficient. It's all the _other_ options that are documented to exist, but without any specifics. It all basically reads as "reserved for future use".

-- 
Carson
Eric D. Mudama
2009-Jun-23 02:33 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Mon, Jun 22 at 15:46, Miles Nordin wrote:

>>>>>> "edm" == Eric D Mudama <edmudama at bounceswoosh.org> writes:
>
> edm> We bought a Dell T610 as a fileserver, and it comes with an
> edm> LSI 1068E based board (PERC6/i SAS).
>
> which driver attaches to it?
>
> pciids.sourceforge.net says this is a 1078 board, not a 1068 board.
>
> please, be careful.  There's too much confusion about these cards.

Sorry, that may have been confusing. We have the cheapest storage option on the T610, with no onboard cache. I guess it's called the "Dell SAS6i/R" while they reserve the PERC name for the ones with cache. I had understood that they were basically identical except for the cache, but maybe not.

Anyway, this adapter has worked great for us so far.

Snippet of prtconf -D:

i86pc (driver name: rootnex)
    pci, instance #0 (driver name: npe)
        pci8086,3411, instance #6 (driver name: pcie_pci)
            pci1028,1f10, instance #0 (driver name: mpt)
                sd, instance #1 (driver name: sd)
                sd, instance #6 (driver name: sd)
                sd, instance #7 (driver name: sd)
                sd, instance #2 (driver name: sd)
                sd, instance #4 (driver name: sd)
                sd, instance #5 (driver name: sd)

For this board the mpt driver is being used, and here's the prtconf -pv info:

Node 0x00001f
    assigned-addresses:  81020010.00000000.0000fc00.00000000.00000100.83020014.00000000.df2ec000.00000000.00004000.8302001c.00000000.df2f0000.00000000.00010000
    reg:  00020000.00000000.00000000.00000000.00000000.01020010.00000000.00000000.00000000.00000100.03020014.00000000.00000000.00000000.00004000.0302001c.00000000.00000000.00000000.00010000
    compatible: 'pciex1000,58.1028.1f10.8' + 'pciex1000,58.1028.1f10' + 'pciex1000,58.8' + 'pciex1000,58' + 'pciexclass,010000' + 'pciexclass,0100' + 'pci1000,58.1028.1f10.8' + 'pci1000,58.1028.1f10' + 'pci1028,1f10' + 'pci1000,58.8' + 'pci1000,58' + 'pciclass,010000' + 'pciclass,0100'
    model:  'SCSI bus controller'
    power-consumption:  00000001.00000001
    devsel-speed:  00000000
    interrupts:  00000001
    subsystem-vendor-id:  00001028
    subsystem-id:  00001f10
    unit-address:  '0'
    class-code:  00010000
    revision-id:  00000008
    vendor-id:  00001000
    device-id:  00000058
    pcie-capid-pointer:  00000068
    pcie-capid-reg:  00000001
    name:  'pci1028,1f10'

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
Just a side note on the PERC labelled cards: they don't have a JBOD mode, so you _have_ to use hardware RAID. This may or may not be an issue in your configuration, but it does mean that moving disks between controllers is no longer possible. The only way to do a pseudo JBOD is to create broken RAID 1 volumes, which is not ideal.

Regards,

Erik Ableson

+33.6.80.83.58.28
Sent from my iPhone

On 23 juin 2009, at 04:33, "Eric D. Mudama" <edmudama at bounceswoosh.org> wrote:

> On Mon, Jun 22 at 15:46, Miles Nordin wrote:
>>>>>>> "edm" == Eric D Mudama <edmudama at bounceswoosh.org> writes:
>>
>> edm> We bought a Dell T610 as a fileserver, and it comes with an
>> edm> LSI 1068E based board (PERC6/i SAS).
>>
>> which driver attaches to it?
>>
>> pciids.sourceforge.net says this is a 1078 board, not a 1068 board.
>>
>> please, be careful.  There's too much confusion about these cards.
>
> Sorry, that may have been confusing. We have the cheapest storage
> option on the T610, with no onboard cache. I guess it's called the
> "Dell SAS6i/R" while they reserve the PERC name for the ones with
> cache. I had understood that they were basically identical except for
> the cache, but maybe not.
>
> Anyway, this adapter has worked great for us so far.
> ...
Kyle McDonald
2009-Jun-23 15:49 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
Erik Ableson wrote:
>
> Just a side note on the PERC labelled cards: they don't have a JBOD
> mode so you _have_ to use hardware RAID. This may or may not be an
> issue in your configuration but it does mean that moving disks between
> controllers is no longer possible. The only way to do a pseudo JBOD is
> to create broken RAID 1 volumes which is not ideal.
>

It won't even let you make single drive RAID 0 LUNs? That's a shame.

The lack of portability is disappointing. The trade-off though is battery-backed cache, if the card supports it.

-Kyle

> Regards,
>
> Erik Ableson
>
> +33.6.80.83.58.28
> Sent from my iPhone
> ...
>>>>> "ave" == Andre van Eyssen <andre at purplecow.org> writes:
>>>>> "et" == Erik Trimble <Erik.Trimble at Sun.COM> writes:
>>>>> "ea" == Erik Ableson <eableson at mac.com> writes:
>>>>> "edm" == "Eric D. Mudama" <edmudama at bounceswoosh.org> writes:

   ave> The LSI SAS controllers with SATA ports work nicely with
   ave> SPARC.

I think what you mean is ``some LSI SAS controllers work nicely with SPARC''. It would help if you tell exactly which one you're using.

I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only). I thought the 1078 are supposed to work with SPARC (mega_sas).

   ave> Oh, and if you do grab the LSI card, don't let James catch you
   ave> using the itmpt driver or lsiutils ;-)

What's the itmpt driver? closed-source sparc driver? Does it work with 1068-based cards?
http://www.lsi.com/DistributionSystem/AssetDocument/itmpt_sparc_5.07.04.txt

   edm> We bought a Dell T610 as a fileserver, and it comes with an
   edm> LSI 1068E based board (PERC6/i SAS).

carton> pciids.sourceforge.net says this is a 1078 board, not a 1068
carton> board.

   edm> it's called the "Dell SAS6i/R" while they reserve the PERC name
   edm> for the ones with cache. I had understood that they were
   edm> basically identical except for the cache, but maybe not.

   edm> (driver name: pcie_pci)
   edm> pci1028,1f10, instance #0 (driver name: mpt)

ah, I am guessing not identical. mpt = 1068. I found this in pciids:

1000  LSI Logic / Symbios Logic
        0058  SAS1068E PCI-Express Fusion-MPT SAS
                1028 1f10  SAS 6/iR Integrated RAID Controller

pciids.sourceforge.net doesn't always specify 1068 vs 1078, but in this case it does. so, AIUI, this card will not work on SPARC. but maybe with this other proprietary driver itmpt it will?

   ea> Just a side note on the PERC labelled cards: they don't have a
   ea> JBOD mode so you _have_ to use hardware RAID.

This problem is not true of my AOC-USAS-L8i (1068) with the proprietary mpt driver -- it uses unlabeled disks. so, I bet it's not true of the Dell no-battery-nvram cards either. so... possibly, the Dell PERC cards with a battery/cache will work in SPARC. Has anyone tried?

also:

   et> I have an AOC-SAT2-MV8 in an older Opteron-based system.
   et> Also, it's a 3.3v card (won't work in 5v slots). None
   et> of this should be a problem in any modern motherboard/case
   et> setup, only in really old stuff.

first, I think that's not true, because I have that card working in a 5V 32-bit slot. second, if you really did have a 3.3V-only card, a modern system would not make it magically okay. It would be a serious inconvenience. Slots are either 5V or 3.3V, not both. There's such a thing as a dual-voltage card, but there is no such thing as a dual-voltage slot, and most 32-bit slots are 5V (they have the key farther from the external-connector face of the card). I think the reason there cannot be a dual-voltage slot is that the slot's on a bus shared with other cards, so all cards on the same bus must agree on the same voltage.
The problem I had was with the single RAID 0 volumes (I miswrote RAID 1 in the original message).

This is not a straight-to-disk connection, and you'll have problems if you ever need to move disks around or move them to another controller. I agree that the MD1000 with ZFS is a rocking, inexpensive setup (we have several!) but I'd recommend using a SAS card with a true JBOD mode for maximum flexibility and portability. If I remember correctly, I think we're using the Adaptec 3085. I've pulled 465MB/s write and 1GB/s read off the MD1000 filled with SATA drives.

Regards,

Erik Ableson

+33.6.80.83.58.28
Sent from my iPhone

On 23 juin 2009, at 21:18, Henrik Johansen <henrik at scannet.dk> wrote:

> Kyle McDonald wrote:
>> Erik Ableson wrote:
>>>
>>> Just a side note on the PERC labelled cards: they don't have a
>>> JBOD mode so you _have_ to use hardware RAID. The only way to
>>> do a pseudo JBOD is to create broken RAID 1 volumes which is not
>>> ideal.
>>>
>> It won't even let you make single drive RAID 0 LUNs? That's a shame.
>
> We currently have 90+ disks that are created as single drive RAID 0 LUNs
> on several PERC 6/E (LSI 1078E chipset) controllers and used by ZFS.
>
> I can assure you that they work without any problems and perform very
> well indeed.
>
> In fact, the combination of PERC 6/E and MD1000 disk arrays has worked
> so well for us that we are going to double the number of disks during
> this fall.
>
>> The lack of portability is disappointing. The trade-off though is
>> battery backed cache if the card supports it.
>>
>> -Kyle
> ...
> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).
> I thought the 1078 are supposed to work with SPARC (mega_sas).

Hmmm....

<shelob:/home/volker,23204> uname -a
SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000
<shelob:/home/volker,23205> man mpt
Devices                                                    mpt(7D)

NAME
     mpt - SCSI host bus adapter driver

SYNOPSIS
     scsi at unit-address

DESCRIPTION
     The mpt host bus adapter driver is a SCSA compliant nexus
     driver that supports the LSI 53C1030 SCSI, SAS1064, SAS1068
     and Dell SAS 6i/R controllers.
...

:-)

-- 
------------------------------------------------------------------------
Volker A. Brandt                  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH     WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: vab at bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
Miles Nordin wrote:
>>>>>> "ave" == Andre van Eyssen <andre at purplecow.org> writes:
>>>>>> "et" == Erik Trimble <Erik.Trimble at Sun.COM> writes:
>>>>>> "ea" == Erik Ableson <eableson at mac.com> writes:
>>>>>> "edm" == "Eric D. Mudama" <edmudama at bounceswoosh.org> writes:
>
>    ave> The LSI SAS controllers with SATA ports work nicely with
>    ave> SPARC.
>
> I think what you mean is ``some LSI SAS controllers work nicely with
> SPARC''. It would help if you tell exactly which one you're using.
>
> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).

Sun has been using the LSI 1068[E] and its cousin, 1064[E], in SPARC machines for many years. In fact, I can't think of a SPARC machine in the current product line that does not use either 1068 or 1064 (I'm sure someone will correct me, though ;-)
 -- richard
Henrik Johansen
2009-Jun-23 20:34 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
Erik Ableson wrote:

> The problem I had was with the single RAID 0 volumes (I miswrote RAID 1
> in the original message).
>
> This is not a straight-to-disk connection, and you'll have problems if
> you ever need to move disks around or move them to another controller.

Would you mind explaining exactly what issues or problems you had?

I have moved disks around several controllers without problems. You must remember, however, to create the RAID 0 LUN through LSI's megaraid CLI tool and/or to clear any foreign config before the controller will expose the disk(s) to the OS.

The only real problem that I can think of is that you cannot use the autoreplace functionality of recent ZFS versions with these controllers.

> I agree that the MD1000 with ZFS is a rocking, inexpensive setup (we
> have several!) but I'd recommend using a SAS card with a true JBOD
> mode for maximum flexibility and portability. If I remember correctly,
> I think we're using the Adaptec 3085. I've pulled 465MB/s write and
> 1GB/s read off the MD1000 filled with SATA drives.
> ...

-- 
Med venlig hilsen / Best Regards

Henrik Johansen
henrik at scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet
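For reference, a sketch of the megaraid-CLI steps described above, using LSI's MegaCli tool. The adapter number (-a0) and enclosure:slot address (252:0) are illustrative only and vary per system:

    # MegaCli -CfgForeign -Clear -a0      (clear foreign config from moved disks)
    # MegaCli -CfgLdAdd -r0 [252:0] -a0   (create a single-drive RAID 0 LUN from
                                           the disk in enclosure 252, slot 0)
    # MegaCli -LDInfo -Lall -a0           (list the logical drives the controller
                                           now exposes to the OS)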
James C. McPherson
2009-Jun-23 22:37 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Mon, 22 Jun 2009 15:28:08 -0700
Carson Gaspar <carson at taltos.org> wrote:

> James C. McPherson wrote:
>
> > Use raidctl(1m). For fwflash(1m), this is on the "future project"
> > list purely because we've got much higher priority projects on the
> > boil - if we couldn't use raidctl(1m) this would be higher up the
> > list.
>
> Nice to see that raidctl can do that. Although I don't see a way to
> flash the BIOS or fcode with raidctl... am I missing something, is it a
> doc bug, or is it not possible? The man page intro mentions BIOS and
> fcode, but the only option I can see is '-F' and it just says firmware...

We include both bios and fcode in the definition of firmware. The manpage for raidctl(1m) also gives an example:

    Example 4 Updating Flash Images on the Controller

    The following command updates flash images on the controller 0:

    # raidctl -F lsi_image.fw 0

...

> > Did you try "cfgadm -lav" ? I was under the impression that the
> > cfgadm(1m) manpage's examples section was sufficient to provide
> > at least a starting point for a usable command line.
> >
> > If you don't believe that is the case, I'd appreciate you filing
> > a bug against it (yes, we do like to get doc/manpage bugs) so that
> > we can make the manpage better.
> ...
> > $ cfgadm -alv c0 c3
> > ...
>
> That gives the same data as 'ls -l /dev/dsk/c0t4d0'. It does _not_ give
> the LSI HBA port number. And given the plethora of device mapping
> options in the LSI controller, you really want the real port numbers.

I don't see why that makes a difference to you, and I'd be grateful if you'd clue me in on that. I only know of two device mapping options for the 1064/1068-based cards, which are "logical target id" and "SAS WWN". We use the "logical target id" method with mpt(7d).

> As for the man page, for a basic "give me a list of devices" the man
> page is overly complex and verbose, but sufficient. It's all the _other_
> options that are documented to exist, but without any specifics. It all
> basically reads as "reserved for future use".

There are other manpages referred to in the SEE ALSO section of the cfgadm(1m) manpage, just like with other manpages:

    SEE ALSO
        cfgadm_fp(1M), cfgadm_ib(1M), cfgadm_pci(1M), cfgadm_sbd(1M),
        cfgadm_scsi(1M), cfgadm_usb(1M), ifconfig(1M), mount(1M),
        prtdiag(1M), psradm(1M), syslogd(1M), config_admin(3CFGADM),
        getopt(3C), getsubopt(3C), isatty(3C), attributes(5), environ(5)

What else are you thinking of as "reserved for future use"?

James C. McPherson
-- 
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
>>>>> "vab" == Volker A Brandt <vab at bb-c.de> writes:

    >> I thought the LSI 1068 do not work with SPARC (mfi driver, x86
    >> only).  I thought the 1078 are supposed to work with SPARC
    >> (mega_sas).

   vab> <shelob:/home/volker,23204> uname -a
   vab> SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000
   vab> <shelob:/home/volker,23205> man mpt
   vab> [...]
   vab> DESCRIPTION  The mpt host bus adapter driver is a SCSA
   vab> compliant nexus driver that supports the LSI 53C1030 SCSI,
   vab> SAS1064, SAS1068 and Dell SAS 6i/R controllers.

damnit. I guess I got it backwards.

mega_sas and 1078 are x86-only, need lsiutil/MegaCli/whatever blob, and use LSI-labeled disks that don't move between controllers easily, but the driver comes with source.

mpt and 1068 are x86/SPARC, use plain moveable disks (not LSI RAID0s), but have a proprietary driver.

I'm sorry -- I'm only making it worse.
Richard Elling wrote:
> Miles Nordin wrote:
>> ...
>> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).
>
> Sun has been using the LSI 1068[E] and its cousin, 1064[E], in
> SPARC machines for many years. In fact, I can't think of a
> SPARC machine in the current product line that does not use
> either 1068 or 1064 (I'm sure someone will correct me, though ;-)
> -- richard

Might be worth having a look at the T1000 to see what's in there. We used to ship those with SATA drives in.

cheers,
--justin
I think this is the board that shipped in the original T2000 machines before they began putting the sas/sata onboard:

LSISAS3080X-R

Can anyone verify this?

Justin Stringfellow wrote:
> Richard Elling wrote:
>> Sun has been using the LSI 1068[E] and its cousin, 1064[E], in
>> SPARC machines for many years. In fact, I can't think of a
>> SPARC machine in the current product line that does not use
>> either 1068 or 1064 (I'm sure someone will correct me, though ;-)
>> -- richard
>
> Might be worth having a look at the T1000 to see what's in there. We
> used to ship those with SATA drives in.
>
> cheers,
> --justin
>>>>> "jr" == Jacob Ritorto <Jacob.Ritorto at GMail.COM> writes:jr> I think this is the board that shipped in the original jr> T2000 machines before they began putting the sas/sata onboard: jr> LSISAS3080X-R jr> Can anyone verify this? can''t verify but FWIW i fucked it up: >>>> I thought the LSI 1068 do not work with SPARC (mfi driver, >>>> x86 only). ^^^^^ me. this is wrong. mega_sas, the open source driver for 1078/PERC, is x86-only. http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027338.html and mpt is the 1068 driver, proprietary, works on x86 and SPARC. mfi is some other (abandoned?) random third-party open-source driver for some of these cards that no one''s mentioned using yet, at https://svn.itee.uq.edu.au/repo/mfi/ then there is also itmpt, the third-party-downloadable closed-source driver from LSI Logic, dunno much about it but someone here used it. sorry. There''s also been talk of two tools, MegaCli and lsiutil, which are both binary only and exist for both Linux and Solaris, and I think are used only with the 1078 cards but maybe not. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090624/452903f5/attachment.bin>
Hey sbreden! :o)

No, I haven't tried to tinker with my drives; they have been working
the whole time. I suspect (I can't remember) that each SATA slot on the
card has a number attached to it? Can anyone confirm this? If I am
right, OpenSolaris will say something like "disk 6 is broken", and on
the card there is a number 6, so you can identify the disk?

I thought of exchanging my PCI card for a PCIe variant instead, to
reach higher speeds; PCI-X is legacy. The problem with PCIe cards is
that soon SSDs will be common. A ZFS raid of SSDs would need maybe
PCIe x16 or so to reach maximum bandwidth, and today's PCIe cards are
all PCIe x4 or thereabouts. I would need a PCIe x16 card to make it
future-proof for SSDs. Maybe the best bet would be to attach an SSD
directly to a PCIe slot, to reach maximum transfer speed? Or wait for
SATA 3? I don't know. I want to wait until SSD raids are tested out;
then I will buy an appropriate card capable of SSD raids. Maybe SSDs
should never be used with a card, and should always connect directly to
a SATA port?

Until I know more about this, my PCI card will be fine; 150MB/sec is OK
for my personal needs. (My ZFS raid is connected to my desktop PC. I
don't have a server that is on 24/7 using power -- I want to save
power. Save the earth! :o) All five of my ZFS raid disks are connected
to one Molex connector, and that Molex has a power switch. So I just
turn on the ZFS raid, copy all the files I need to my system disk
(which is 500GB), and then immediately reboot and turn off the ZFS
raid. This way I only have one disk active, which I use as a cache.
When my data are ready, I copy them to the ZFS raid and then cut the
power to the ZFS raid disks.)

However, I have a question: which speed will I get with this setup?
Say I have my 5 disks on the PCI card, then add one SSD to one
motherboard SATA port and another SSD to a second SATA port. Then I
have:

5 disks on PCI  => 150MB/sec (shared)
1 disk on SATA  => 300MB/sec (I assume SATA reaches 300MB/sec?)
1 disk on SATA  => 300MB/sec

I connect all 7 disks into one ZFS raid. Which speed will I get? Will I
get 150 + 300 + 300MB/sec? Or will the PCI slot strangle the SATA
ports? Or will the fastest speed "win", leaving me with only 300MB/sec?
-- 
This message posted from opensolaris.org
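A rough sketch of the arithmetic, assuming a conventional 32-bit/33MHz
PCI slot and a pool that stripes evenly across all seven disks (the
exact vdev layout would change the numbers):

  PCI bus ceiling:  ~133 MB/s theoretical, shared by all 5 disks on the card
  Chipset SATA:     each port independent, up to 300 MB/s interface speed

  per-disk rate behind the card  ~ 133 / 5 ~ 27 MB/s
  even stripe across 7 disks     ~ 7 x 27  ~ 190 MB/s total

So neither 150 + 300 + 300 nor "the fastest wins": ZFS spreads writes
roughly evenly across the devices in a stripe, so the PCI-attached
disks gate the whole pool at a little above the shared bus ceiling.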
Bob Friesenhahn
2009-Jun-24 20:38 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Wed, 24 Jun 2009, Orvar Korvar wrote:
> I thought of exchanging my PCI card for a PCIe variant instead, to
> reach higher speeds. PCI-X is legacy. The problem with PCIe cards is
> that soon SSDs will be common. A ZFS raid of SSDs would need maybe
> PCIe x16 or so to reach maximum bandwidth. [...]

I don't think this is valid thinking, because it assumes that write
rates for SSDs are higher than for traditional hard drives. This
assumption is not often correct. Maybe someday.

SSDs offer much lower write latencies (no head seek!), but their bulk
sequential data transfer properties are not yet better than hard
drives'.

The main purpose of using SSDs with ZFS is to reduce latencies for the
synchronous writes required by network file service and databases.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
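For reference, this latency point is what ZFS's separate intent log
(slog) device is for. A minimal sketch, assuming an existing pool named
tank and an SSD at the hypothetical device name c3t0d0:

  # Dedicate a low-latency SSD to the ZIL so synchronous writes
  # (NFS, databases) commit quickly; bulk data still streams to the
  # main rotating-disk vdevs:
  zpool add tank log c3t0d0

  # Confirm the log device appears in the pool layout:
  zpool status tank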
Eric D. Mudama
2009-Jun-24 22:28 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Wed, Jun 24 at 15:38, Bob Friesenhahn wrote:
> I don't think this is valid thinking, because it assumes that write
> rates for SSDs are higher than for traditional hard drives. This
> assumption is not often correct. Maybe someday.
>
> SSDs offer much lower write latencies (no head seek!), but their bulk
> sequential data transfer properties are not yet better than hard
> drives'.
>
> The main purpose of using SSDs with ZFS is to reduce latencies for the
> synchronous writes required by network file service and databases.

In the "available 5 months ago" category, the Intel X25-E will write
sequentially at ~170MB/s according to the datasheets. That is faster
than most, if not all, rotating media today.

--eric

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
Bob Friesenhahn
2009-Jun-24 23:43 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Wed, 24 Jun 2009, Eric D. Mudama wrote:
>> The main purpose of using SSDs with ZFS is to reduce latencies for
>> the synchronous writes required by network file service and
>> databases.
>
> In the "available 5 months ago" category, the Intel X25-E will write
> sequentially at ~170MB/s according to the datasheets. That is faster
> than most, if not all, rotating media today.

Sounds good. Is that after the whole device has been rewritten a few
times, or just when you first use it? How many of these devices do you
own and use?

Seagate Cheetah drives can now support a sustained data rate of
204MB/second. That is with 600GB capacity rather than 64GB, and at a
similar price point (i.e. 10X less cost per GB). Or you can just RAID-0
a few cheaper rotating rust drives and achieve a huge sequential data
rate.

I see that the Intel X25-E claims a sequential read performance of
250 MB/s.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
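To put numbers on the striping point -- a back-of-the-envelope sketch,
assuming ~100 MB/s sustained per commodity 7200rpm SATA drive and a bus
that can carry the total:

  4 drives striped (RAID-0 / ZFS stripe)  ~ 4 x 100 ~ 400 MB/s sequential
  one X25-E                               ~ 170 MB/s write, 250 MB/s read

Sequential bandwidth scales almost linearly with spindle count; it is
random IOPS where adding spindles gets expensive, which is the flash
argument made later in the thread.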
On 25/06/2009, at 5:16 AM, Miles Nordin wrote:

> and mpt is the 1068 driver: proprietary, and works on x86 and SPARC.

> then there is also itmpt, the third-party-downloadable closed-source
> driver from LSI Logic; I don't know much about it, but someone here
> has used it.

I'm confused. Why do you say the mpt driver is "proprietary" and the
LSI-provided tool "closed source"? I thought they were both closed
source, and that the LSI chipset specifications were proprietary.
Miles Nordin wrote:
> There's also been talk of two tools, MegaCli and lsiutil, which are
> both binary-only and exist for both Linux and Solaris; I think they
> are used only with the 1078 cards, but maybe not.

lsiutil works with LSI chips that use the Fusion-MPT interface (SCSI,
SAS, and FC), including the 1068. I've used it with both the mpt and
itmpt drivers.

MegaCLI appears to be for MegaRAID SAS and SATA II controllers (using
the mega_sas driver), including the 1078. I've never used it.

--
Carson
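For what it's worth, a minimal way to poke at a 1068 with lsiutil,
assuming the binary from LSI's download page is installed -- with no
arguments it enumerates the Fusion-MPT ports it finds and drops into an
interactive menu:

  # Must be run as root; works against both the mpt and itmpt drivers:
  lsiutil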
Eric D. Mudama
2009-Jun-25 16:11 UTC
[zfs-discuss] Best controller card for 8 SATA drives ?
On Wed, Jun 24 at 18:43, Bob Friesenhahn wrote:
> On Wed, 24 Jun 2009, Eric D. Mudama wrote:
>> In the "available 5 months ago" category, the Intel X25-E will write
>> sequentially at ~170MB/s according to the datasheets. That is faster
>> than most, if not all, rotating media today.
>
> Sounds good. Is that after the whole device has been rewritten a few
> times, or just when you first use it?

Based on the various review sites, some tests show a temporary
performance decrease when performing sequential IO over the top of
previously randomly-written data, which resolves in some short time
period. I am not convinced that simply writing to the device makes it
slower. Actual performance will be workload-specific; YMMV.

> How many of these devices do you own and use?

I own two of them personally, and work with many every day.

> Seagate Cheetah drives can now support a sustained data rate of
> 204MB/second. That is with 600GB capacity rather than 64GB, and at a
> similar price point (i.e. 10X less cost per GB). Or you can just
> RAID-0 a few cheaper rotating rust drives and achieve a huge
> sequential data rate.

True. In $ per sequential GB/s, rotating rust still wins by far.
However, your comment about all flash being slower than rotating media
at sequential writes was mistaken. And even at 10x the price, if you're
working with a dataset that needs random IO, the IOPS per dollar from
flash can be significantly greater than from any amount of rust, and
typically with much lower power consumption to boot.

Obviously the primary benefits of SSDs aren't in sequential
reads/writes, but they're not necessarily complete dogs there either.

--eric

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
>>>>> "jl" == James Lever <j at jamver.id.au> writes:jl> I thought they were both closed source yes, both are closed source / proprietary. If you are really confused and not just trying to pick a dictionary fight, I can start saying ``closed source / proprietary'''' on Solaris lists from now on. On Linux lists, ``proprietary'''' is clear enough, but maybe the people around here are different. jl> and that the LSI chipset specifications were proprietary. <shrug> I don''t know about specifications, but I do know that Linux has an open source driver for 1068, and Solaris has an open source driver for 1078. Getting source without specifications is a problem, though, yes, if you want to track down a bug in the driver or write a driver for another OS. The other problem is, with both chips but especially with the 1078, it soudns like these cards are very ``firmware'''' heavy, and the firmware is proprietary. This causes the complaints here that ''hd'' (smartctl equivalent) doesn''t work. And that with PERC/1078 they have to make RAID0''s of each disk with LSI labels on the disk which blocks moving the disk from one controller to another---meaning a broken controller could potentially toast your whole zpool no matter what disk redundancy you had, unless you figure out some way to escape the trap. If not for the ``closed-source / proprietary'''' firmware, these two problems could never persist. so, there is still no SATA driver for Solaris that: * is open-source. like a fully-open stack, not just ``here look! here is some source. is that a rabbit over there?'''' open-source meaning I can add smartctl or DVD writer or NCQ support without bumping into some strange blob that stops me. open-source meaning I can swap out a disk without having to run any proprietary code to ``bless'''' the disk first. no BIOS bluescreen garbage either. * supports NCQ and hotplug * performs well and doesn''t have a lot of bugs, like ``freezes'''' and so on * works on x86 and SPARC * comes in card form so it can achieve high port density on Linux, both Marvell and LSI 1068 driver come close to or meet all these. (smartctl DOES work with Linux''s open source 1068 driver.) Sun has more leverage with LSI than Linux not less because they are an actual customer of LSI''s chips for the hardware they sell---even ditched Marvell for LSI!---yet they do worse on driver openness negotiation and then try to blame LSI''s whim, and tell random scmuck user to ``go complain to LSI'''' when we are not LSI''s customer, Sun is. The issue gets more complicated, but not better, IMHO. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090625/ced921fe/attachment.bin>
The situation regarding the lack of open-source drivers for these LSI
1068/1078-based cards is quite scary.

And did I understand you correctly when you say that these LSI
1068/1078 drivers write labels to drives, meaning you can't move drives
from an LSI-controlled array to another arbitrary array because of
those labels?

If this is the case, then surely my best bet would be to go for one of
the non-LSI controllers -- e.g. the AOC-SAT2-MV8 instead, which I
presume does not write labels to the array drives?

Please correct me if I have misunderstood.

Cheers,
Simon
-- 
This message posted from opensolaris.org
>>>>> "sb" == Simon Breden <no-reply at opensolaris.org> writes:sb> The situation regarding lack of open source drivers for these sb> LSI 1068/1078-based cards is quite scary. meh I dunno. The amount of confusion is a little scary, I guess. sb> And did I understand you correctly when you say that these LSI sb> 1068/1078 drivers write labels to drives, no incorrect. I''m using a 1068 (``closed-source / proprietary driver''''), and it doesn''t write such labels. The firmware piece is big, so not all 1068 are necessarily the same: I think some are capable of RAID0/RAID1. but so far I''ve not heard of a 1068 demanding LSI labels, and mine doesn''t. The LSI 1078 (PERC) with the open-source x86-only driver is the one with the big ``closed-source / proprietary'''' firmware blob running on the card itself. Others have reported this blob demands LSI labels on the disks. I don''t have one. who knows, maybe you can cross-flash some weird firmware from some strange variant of card that doesn''t need LSI labels on each disk, or maybe some binary blob config tool will flip a magic undocumented switch inside the card to make it JBOD-able. I don''t like to deal in such circus-hoop messes unless someone else can do the work and tell me exactly how. sb> go for the non-LSI controllers -- e.g. the AOC-SAT2-MV8 no, you misunderstood because there are two kinds of LSI card with two different drivers. compared to Marvell, LSI 1068 has a cheaper bus (PCIe), performs better, and seems to have fewer bugs (ex. 6787312 is duplicate of a secret Marvell bug), and its proprietary driver includes a SPARC object. The Marvell controller is still ``closed-source / proprietary'''' driver (Linux driver for the same chip: open source), so you gain nothing there. The one thing Marvell might gain you is, it''s SATA framework, so smartctl/hd may be closer to working. On Linux both cards use their uniform SCSI framework so smartctl works. I have both AOC-SAT2-MV8 and AOC-USAS-L8i and suggest the latter. You have to unscrew teh reverse-polarity card-edge bracket and buy some octopus cables from thenerds.net or adaptec or similar, is all. AOC-USAS-L8i works with these cables among others: http://www.thenerds.net/3WARE.AMCC_Serial_Attached_SCSI_SAS_Internal_Cable.CBLSFF8087OCF10M.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090625/9de6a676/attachment.bin>
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama
<edmudama at bounceswoosh.org> wrote:

> True. In $ per sequential GB/s, rotating rust still wins by far.
> However, your comment about all flash being slower than rotating
> media at sequential writes was mistaken. And even at 10x the price,
> if you're working with a dataset that needs random IO, the IOPS per
> dollar from flash can be significantly greater than from any amount
> of rust, and typically with much lower power consumption to boot.
>
> Obviously the primary benefits of SSDs aren't in sequential
> reads/writes, but they're not necessarily complete dogs there either.

It's all about IOPS. An HDD can do about 300 IOPS; an SSD can reach
10k+ IOPS. On sequential writes, low IOPS is obviously not a problem:
300 x 128kB is about 40MB/s. But for small-packet random sync NFS
traffic, 300 x 32kB is hardly 10MB/s.

Nicholas
Miles Nordin wrote:
> sb> The situation regarding the lack of open-source drivers for these
> sb> LSI 1068/1078-based cards is quite scary.
>
> meh, I dunno. The amount of confusion is a little scary, I guess.
>
> sb> And did I understand you correctly when you say that these LSI
> sb> 1068/1078 drivers write labels to drives,
>
> No, incorrect. I'm using a 1068 (``closed-source / proprietary''
> driver), and it doesn't write such labels.

I think the confusion arises because the 1068 can do "hardware" RAID:
in that mode it can and does write its own labels, and it reserves
space so that disks of slightly different sizes can serve as
replacements. But that is only one mode of operation.

Nit: the definition of "proprietary" is "relating to ownership." One
could argue that Linus still "owns" Linux, since he has such strong
control over what is accepted into the Linux kernel :-) Similarly, one
could argue that a forker would own the fork. In other words, "open
source" and "proprietary" are not mutually exclusive, nor is "closed
source" a synonym for "proprietary." You say tomato, I say 'mater.
-- richard
Miles, thanks for helping clear up the confusion surrounding this
subject!

My decision is now as above: for my existing NAS, leave the pool as-is
and seek a 2+ port SATA card for the 2-drive mirror of 2 x 30GB SATA
boot SSDs that I want to add.

For the next NAS build later this summer, I will go for an LSI
1068-based SAS/SATA configuration in a PCIe expansion slot, rather than
the ageing PCI-X slots.

Using PCIe instead of PCI-X also opens up a load more possible
motherboards, although since I want ECC support this still limits the
choice of mobos. I was thinking of using something like a Xeon E5504
(Nehalem) in the new NAS, and I've been hunting for a good, highly
compatible mobo that will give the least aggro (trouble) with
OpenSolaris. This one looks good: it's pretty much all Intel chipsets,
it has an LSI SAS1068E (which I trust should be supported by Solaris),
it has additional PCIe slots for future expansion, a basic onboard
graphics chip, and dual Intel GbE NICs:

SuperMicro X8STi-3F:
http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8STi-3F.cfm

Any comments on this mobo are welcome, plus suggestions for a possible
PCIe-based 2+ port SATA card that is reliable and has a solid driver.

Simon
-- 
This message posted from opensolaris.org
> I think the confusion arises because the 1068 can do "hardware" RAID:
> in that mode it can and does write its own labels, and it reserves
> space so that disks of slightly different sizes can serve as
> replacements. But that is only one mode of operation.

So it sounds like, if I use a 1068-based device and I *don't* want it
to write labels to the drives (to keep the drives easily portable to a
different controller), then I need to avoid the "RAID" mode of the
device and instead force it to use JBOD mode. Is this easily
selectable? I guess you just avoid the "Use RAID mode" option in the
controller's BIOS, or something?
-- 
This message posted from opensolaris.org
On Thu, 25 Jun 2009 15:43:17 -0700 (PDT)
Simon Breden <no-reply at opensolaris.org> wrote:

> So it sounds like, if I use a 1068-based device and I *don't* want it
> to write labels to the drives (to keep the drives easily portable to
> a different controller), then I need to avoid the "RAID" mode of the
> device and instead force it to use JBOD mode. Is this easily
> selectable? I guess you just avoid the "Use RAID mode" option in the
> controller's BIOS, or something?

It's even simpler than that with the 1068: just don't use raidctl or
the BIOS to create RAID volumes, and you'll have a bunch of plain
disks. No forcing required.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
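A quick way to verify from Solaris that the controller really is
presenting plain disks -- a sketch using the bundled raidctl(1M):

  # Lists RAID controllers and any volumes defined on them; a 1068
  # used as a plain HBA should report no volumes:
  raidctl -l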
On Fri, Jun 26 at 8:55, James C. McPherson wrote:
> It's even simpler than that with the 1068: just don't use raidctl or
> the BIOS to create RAID volumes, and you'll have a bunch of plain
> disks. No forcing required.

Exactly. Worked as such out-of-the-box, with no forcing of any kind,
for me.

--eric

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
That sounds even better :)

So what's the procedure to create a zpool using the 1068? Also, are any
special tricks/tips or commands required for using a 1068-based
SAS/SATA device?

Simon
-- 
This message posted from opensolaris.org
On Thu, 25 Jun 2009 16:11:04 -0700 (PDT)
Simon Breden <no-reply at opensolaris.org> wrote:

> That sounds even better :)
>
> So what's the procedure to create a zpool using the 1068?

Same as for any other device:

# zpool create poolname vdev vdev vdev

> Also, are any special tricks/tips or commands required for using a
> 1068-based SAS/SATA device?

No.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
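To make that concrete for the 8-drive case in this thread -- a sketch
with hypothetical device names (yours will differ; "echo | format"
lists them):

  # Double-parity raidz2 across all eight disks on the 1068:
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                           c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # Verify the layout and health:
  zpool status tank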
OK, thanks James. -- This message posted from opensolaris.org
Simon Breden wrote:
> So it sounds like, if I use a 1068-based device and I *don't* want it
> to write labels to the drives (to keep the drives easily portable to
> a different controller), then I need to avoid the "RAID" mode of the
> device and instead force it to use JBOD mode. Is this easily
> selectable?

In the Sun onboard version of the 1068, "JBOD" mode is the default. I
don't know about the add-in cards, but I suspect it's the same. Worst
case, you press Ctrl-L (or whatever it prompts you for) at BIOS
initialization and remove any RAID devices it has configured. With no
RAID devices configured, it runs as a pure HBA (i.e. in JBOD mode).

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
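That also covers the original requirement that the drives show up in
format. A quick check from Solaris once the card is running as a pure
HBA:

  # Non-interactively list every disk the system sees; each drive on
  # the 1068 should appear as an ordinary cXtYdZ device:
  echo | format

  # After hot-adding a drive, rebuild the /dev disk links:
  devfsadm -c disk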
Simon Breden wrote:
> For the next NAS build later this summer, I will go for an LSI
> 1068-based SAS/SATA configuration in a PCIe expansion slot [...]
> SuperMicro X8STi-3F:
> http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8STi-3F.cfm
>
> Any comments on this mobo are welcome, plus suggestions for a
> possible PCIe-based 2+ port SATA card that is reliable and has a
> solid driver.

Note that the X8STi-3F requires an L-bracket riser card to use both the
PCI-E x16 and the x8 slot; the riser-mounted card sits horizontally and
is likely limited to low-profile cards. You'd probably have to use a
custom Supermicro case for this to work. Otherwise, you're limited to
the PCI-E x16 slot in the standard vertical orientation.

The board does have an IPMI-based KVM ethernet port, but I have no idea
whether it's supported under Solaris. Also, remember that you'll have
to order a Xeon CPU with this, NOT an i7, in order to get ECC memory
support.

Personally, I'd go for an AMD-based system, which is about the same
cost, with a much better board:
http://www.supermicro.com/Aplus/motherboard/Opteron2000/MCP55/H8DM3-2.cfm
(it comes with a 1068E SAS controller AND the nVidia MCP55-based 6-port
SATA controller, so there's no need for any more PCI cards, and it
supports the add-in card for a remote KVM console; it's a dual-socket,
Extended ATX board, though). The MCP55 is the chipset currently in use
in the Sun X2200 M2 series of servers.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
> The MCP55 is the chipset currently in use in the Sun X2200 M2 series
> of servers.

... which has big problems with certain Samsung SATA disks. :-(

So if you get such a board, be sure to avoid the Samsung 750GB and 1TB
disks. Samsung never acknowledged the bug, nor have they released a
firmware update, and nVidia never said anything about it either. Of
course, I only found out about it after buying lots of Samsung disks
for our X2200s.

Sigh...

Regards -- Volker
--
------------------------------------------------------------------------
Volker A. Brandt                Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim               Email: vab at bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513    Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
Volker A. Brandt wrote:
>> The MCP55 is the chipset currently in use in the Sun X2200 M2 series
>> of servers.
>
> ... which has big problems with certain Samsung SATA disks. :-(
>
> So if you get such a board, be sure to avoid the Samsung 750GB and
> 1TB disks. Samsung never acknowledged the bug, nor have they released
> a firmware update, and nVidia never said anything about it either. Of
> course, I only found out about it after buying lots of Samsung disks
> for our X2200s.

That is true, and it slipped my mind. Thanks for reminding me, Volker.

I'm a Hitachi disk user myself, and they work swell. The Seagates I
have in my X2200 M2 seem to work fine as well. I've not tried any SSDs
yet with the MCP55 -- since they're heavily Samsung under the hood
(regardless of whose name is on the outside), I _hope_ it was just an
HD-specific firmware bug.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
> > So if you get such a board, be sure to avoid the Samsung 750GB and
> > 1TB disks. Samsung never acknowledged the bug, nor have they
> > released a firmware update, and nVidia never said anything about it
> > either.
[...]
> I'm a Hitachi disk user myself, and they work swell. The Seagates I
> have in my X2200 M2 seem to work fine as well.

Yes, all HGST disks I've tried so far work just fine.

> I've not tried any SSDs yet with the MCP55 -- since they're heavily
> Samsung under the hood (regardless of whose name is on the outside),
> I _hope_ it was just an HD-specific firmware bug.

I think it is quite HD-specific. I have another, slightly older, 160GB
Samsung disk that worked fine as a root disk in the X2200 M2. If you do
try an SSD, please let us know. :-)

Regards -- Volker
--
------------------------------------------------------------------------
Volker A. Brandt                Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim               Email: vab at bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513    Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt