I have assembled my home RAID finally, and I think it looks rather good.

http://www.lundman.net/gallery/v/lraid5/p1150547.jpg.html

Feedback is welcome.

I have yet to do proper speed tests; I will do so in the coming week should people be interested.

Even though I have tried to use only existing, and cheap, parts, the total came to more than I expected. The final price is somewhere in the 47,000 yen range (without hard disks).

If I were to make and sell these, they would be 57,000 or so, so I do not really know if anyone would be interested, especially since SOHO NAS devices seem to start around 80,000.

Anyway, it sure has been fun.

Lund

--
Jorgen Lundman       | <lundman at lundman.net>
Unix Administrator   | +81 (0)3-5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500 (cell)
Japan                | +81 (0)3-3375-1767 (home)
I used a 4U case for mine; it's MASSIVE. I used this case here:

http://members.multiweb.nl/nan1/img/norco05.jpg
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021

It's an awesome case for the money; I plan to build another one soon.

On Fri, Jul 31, 2009 at 8:22 AM, Jorgen Lundman <lundman at gmo.jp> wrote:
> I have assembled my home RAID finally, and I think it looks rather good.
> [...]
Yes, please write more about this. The photos are terrific and I appreciate the many useful observations you've made. For my home NAS I chose the Chenbro ES34069, and the biggest problem was finding a SATA/PCI card that would work with OpenSolaris and fit in the case (technically impossible without a ribbon-cable PCI adapter). After seeing this, I may reconsider my choice.

For the SATA card, you mentioned that it was a close fit with the case power switch. Would removing the backplane on the card have helped?

Thanks

n

On Fri, Jul 31, 2009 at 5:22 AM, Jorgen Lundman <lundman at gmo.jp> wrote:
> I have assembled my home RAID finally, and I think it looks rather good.
> [...]
Finding a SATA card that would work with Solaris, and be hot-swap, and have more than 4 ports, sure took a while. Oh, and be reasonably priced ;) Double the price of the dual-core Atom did not seem right.

The SATA card was a close fit to the jumper where the power-switch cable attaches, as you can see in one of the photos. This is because the MV8 card is quite long and has the big plastic SATA sockets. It does fit, but it was the tightest spot.

I also picked the 5-in-3 drive cage that had the "shortest" depth listed, 190mm. For example, the Supermicro M35T is 245mm, another 5cm. I am not sure that would fit.

Lund

Nathan Fiedler wrote:
> For the SATA card, you mentioned that it was a close fit with the case
> power switch. Would removing the backplane on the card have helped?
Some preliminary speed tests; not too bad for a PCI32 card (32-bit/33MHz PCI tops out around 133MB/s of shared bus bandwidth).

http://lundman.net/wiki/index.php/Lraid5_iozone

Jorgen Lundman wrote:
> Finding a SATA card that would work with Solaris, and be hot-swap, and
> have more than 4 ports, sure took a while.
> [...]
On Sat, 2009-08-01 at 22:31 +0900, Jorgen Lundman wrote:
> Some preliminary speed tests; not too bad for a PCI32 card.
>
> http://lundman.net/wiki/index.php/Lraid5_iozone

I don't know anything about iozone, so the following may be NULL && void.

I find the results suspect. 1.2GB/s read and 500MB/s write! These are impressive numbers indeed. I then looked at the file sizes that iozone used... How much memory do you have? It seems like the files would fit comfortably in memory. I think this test needs to be re-run with large files (i.e. >2x memory size) for it to give more accurate data.

Unrelated: what did you use to generate those graphs? They look good.

Also, do you have a hardware list on your site somewhere that I missed? I'd like to know more about the hardware.

--
Louis-Frédéric Feuillette <jebnor at gmail.com>
On Sat, 1 Aug 2009, Louis-Frédéric Feuillette wrote:
> I find the results suspect. 1.2GB/s read and 500MB/s write! These are
> impressive numbers indeed. [...] I think this test needs to be re-run
> with large files (i.e. >2x memory size) for it to give more accurate data.

The numbers are indeed "suspect", but the iozone sweep test is quite useful in order to see the influence of ZFS's caching via the ARC. The sweep should definitely be run to at least 2X the memory size.

> Unrelated: what did you use to generate those graphs? They look good.

Iozone output may be plotted via gnuplot or Microsoft Excel. This looks like the gnuplot output.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
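A side note on the gnuplot suggestion: the iozone source tree ships helper scripts for this (Generate_Graphs, if memory serves), but a single sweep can also be plotted by hand. A minimal sketch, assuming one file-size/throughput column pair has been pulled out of the -R report into a plain text file (write.dat is an illustrative name, not something iozone produces itself):

   # write.dat: column 1 = file size in KB, column 2 = throughput in KB/s
   gnuplot> set logscale x 2
   gnuplot> set xlabel "file size (KB)"
   gnuplot> set ylabel "throughput (KB/s)"
   gnuplot> plot "write.dat" using 1:2 with linespoints title "write"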
I was following Tom's Hardware on how they test NAS units. I have 2GB of memory, so I will re-run the test at 4GB, if I figure out which option that is.

I used Excel for the graphs in this case; gnuplot did not want to work. (Nor did Excel, mind you.)

Bob Friesenhahn wrote:
> The numbers are indeed "suspect", but the iozone sweep test is quite
> useful in order to see the influence of ZFS's caching via the ARC. The
> sweep should definitely be run to at least 2X the memory size.
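The option being hunted for here is iozone's auto-mode maximum file size. A minimal sketch of a sweep that covers twice a 2GB machine's RAM, per the iozone man page (the output file name is illustrative):

   # -a     automatic file-size/record-size sweep
   # -g 4g  cap the maximum file size at 4GB (2x the 2GB of RAM)
   # -R -b  also write an Excel-compatible report
   iozone -Ra -g 4g -b lraid5.xls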
OK, I have redone the initial tests at 4G instead. The graphs are in the same place:

http://lundman.net/wiki/index.php/Lraid5_iozone

I also mounted it over NFSv3 and ran more iozone against it. Alas, I started with 100Mbit, so it has taken quite a while. It is constant at 11MB/s though. ;)

Jorgen Lundman wrote:
> I was following Tom's Hardware on how they test NAS units. I have 2GB of
> memory, so I will re-run the test at 4GB, if I figure out which option
> that is.
100Mbit is quite flat at 11MB/s, which is about what the wire allows (100Mbit/s is 12.5MB/s before TCP and NFS overhead):

http://lundman.net/wiki/index.php/Lraid5_iozone#Solaris_10_64-bit.2C_OsX_10.5.5_NFSv3.2C_100MBit.2C_ZIL_cache_disabled

1Gbit, MTU 1500:

http://lundman.net/wiki/index.php/Lraid5_iozone#Solaris_10_64-bit.2C_OsX_10.5.5_NFSv3.2C_1GBit.2C_ZIL_cache_disabled

Not sure how to enable jumbo frames on the rge0. When I use

   dladm set-linkprop -p mtu 9000 rge0

I get "operation not supported"; PERM is "r--". Most likely I have to set it in rge.conf and reboot, but I would need to rebuild my USB image for that. (unplumb, modunload, modload, plumb did not seem to enable it either.)

Jorgen Lundman wrote:
> OK, I have redone the initial tests at 4G instead. The graphs are in the
> same place.
> [...]
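For drivers whose MTU shows up read-only in dladm, the usual pattern on Solaris of this era was a driver .conf property plus a reboot. A sketch only: the property name below is an assumption borrowed from other NIC drivers and is not confirmed for rge, and, as the next reply notes, the chipset may simply not support jumbo frames at all:

   # /kernel/drv/rge.conf -- "default_mtu" is an assumed property name,
   # not confirmed for the rge driver; check the driver man page first
   default_mtu=9000;

   # then, after a reboot, plumb the interface with the larger MTU:
   ifconfig rge0 mtu 9000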
On Sun, Aug 2, 2009 at 3:42 AM, Jorgen Lundman <lundman at gmo.jp> wrote:
> Not sure how to enable jumbo frames on the rge0. When I use
>    dladm set-linkprop -p mtu 9000 rge0
> I get "operation not supported"; PERM is "r--".

Your NIC may not support it. Realtek and Broadcom both make cheap, cheap chipsets that are gigE but do not support jumbo frames. Rather annoying in this day and age.

--Tim
Neal Pollack
2009-Aug-03 16:35 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
On 07/31/09 06:12 PM, Jorgen Lundman wrote:
> Finding a SATA card that would work with Solaris, and be hot-swap, and
> have more than 4 ports, sure took a while. Oh, and be reasonably priced ;)

Let's take this first point: "card that works with Solaris..."

I might try to find some engineers to write device drivers to improve this situation. Would this alias be interested in teaching me which 3 or 4 cards they would put at the top of the "wish list" for Solaris support?

I assume the current feature gap is defined as needing driver support for PCI Express add-in cards that have 4 to 8 ports, are inexpensive JBOD rather than expensive HW RAID, and can handle hot-swap while the OS is running. Would this be correct?

Neal
Nathan Fiedler
2009-Aug-03 17:28 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
I have not done any research in this area, but when I was building my home server I wanted to use a Promise SATA/PCI card; alas, (Open)Solaris has no support at all for the Promise chipsets. Instead I used a rather old card based on the sil3124 chipset.

n

On Mon, Aug 3, 2009 at 9:35 AM, Neal Pollack <Neal.Pollack at sun.com> wrote:
> Let's take this first point: "card that works with Solaris..."
>
> I might try to find some engineers to write device drivers to improve
> this situation. Would this alias be interested in teaching me which 3 or
> 4 cards they would put at the top of the "wish list" for Solaris support?
I have the same case, which I use as direct-attached storage. I never thought about using it with a motherboard inside.

Could you provide a complete parts list?

What sort of temperatures at the chip, chipset, and drives did you find?

Thanks!
The case is made by Chyangfun, and the model made for Mini-ITX motherboards is called the CGN-S40X. They had 6 pcs left when I last talked to them, and need a 3-week lead for more, if I understand correctly. I need to finish my LCD panel work before I will open shop to sell these.

As for temperature, I have only checked the server HDDs so far (on my wiki), but will test with green HDDs tonight. I do not know if Solaris can retrieve the Atom chipset temperature readings.

The parts I used should be listed on my wiki.

Anon wrote:
> Could you provide a complete parts list?
>
> What sort of temperatures at the chip, chipset, and drives did you find?
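For the drive side, one hedged option is smartmontools, if it is installed; its SATA support on Solaris at this time was patchy and controller-dependent, so treat this as a sketch, and the device path is illustrative:

   # query SMART attributes and pull out the temperature line
   smartctl -d sat -a /dev/rdsk/c1t0d0s0 | grep -i temperature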
Robin Bowes
2009-Aug-29 14:02 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
On 03/08/09 17:35, Neal Pollack wrote:
> I assume the current feature gap is defined as needing driver support
> for PCI Express add-in cards that have 4 to 8 ports, are inexpensive
> JBOD rather than expensive HW RAID, and can handle hot-swap while the
> OS is running. Would this be correct?

That would be correct, except I don't know of any cheap 4- to 8-port PCIe SATA cards. I'm still finding that the Supermicro PCI-X 8-port cards are the cheapest option, but they require a PCI-X slot for optimal performance, which generally means a pricey mobo.

R.
Maurilio Longo
2009-Aug-30 16:00 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
Robin,

The LSI 3041E-R and 3081E-R are PCIe 4- and 8-port SATA cards. They are not hot-swap capable, as far as I know, but they do work very well in JBOD (I'm using several of them) and they're not too expensive. See:

http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3041er/index.html
Brandon High
2009-Sep-01 01:06 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
On Sun, Aug 30, 2009 at 9:00 AM, Maurilio Longo <maurilio.longo at libero.it> wrote:
> The LSI 3041E-R and 3081E-R are PCIe 4- and 8-port SATA cards. They are
> not hot-swap capable, as far as I know, but they do work very well in
> JBOD and they're not too expensive.

Supermicro has an inexpensive PCIe SAS card available now:

http://supermicro.com/products/accessories/addon/AOC-SASLP-MV8.cfm

You can also use something like the AOC-USAS-L8i, which costs about the same. It's a UIO card, so the components are on the "wrong" side of the board, but it's still just PCIe electrically.

-B

--
Brandon High : bhigh at freaks.com
Tim Cook
2009-Sep-01 01:14 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
On Mon, Aug 31, 2009 at 8:06 PM, Brandon High <bhigh at freaks.com> wrote:
> Supermicro has an inexpensive PCIe SAS card available now:
>
> http://supermicro.com/products/accessories/addon/AOC-SASLP-MV8.cfm

The MV8 is a Marvell-based card, and it appears there are no Solaris drivers for it. There doesn't appear to be any movement from Sun or Marvell to provide any, either.

--Tim
Jorgen Lundman
2009-Sep-01 01:26 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
> The MV8 is a Marvell-based card, and it appears there are no Solaris
> drivers for it. There doesn't appear to be any movement from Sun or
> Marvell to provide any, either.

Do you mean specifically Marvell 6480 drivers? I use both the DAC-SATA-MV8 and the AOC-SAT2-MV8, which use the Marvell MV88SX and work very well in Solaris (package SUNWmv88sx).

Lund
Tim Cook
2009-Sep-01 01:48 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
On Mon, Aug 31, 2009 at 8:26 PM, Jorgen Lundman <lundman at gmo.jp> wrote:
> Do you mean specifically Marvell 6480 drivers? I use both the
> DAC-SATA-MV8 and the AOC-SAT2-MV8, which use the Marvell MV88SX and
> work very well in Solaris (package SUNWmv88sx).

Interesting; there was a big thread over at HardOCP that covered this card, and they said it didn't work with 2009.06.

--Tim
James Andrewartha
2009-Sep-01 07:26 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
Jorgen Lundman wrote:
>> The MV8 is a Marvell-based card, and it appears there are no Solaris
>> drivers for it.
>
> Do you mean specifically Marvell 6480 drivers? I use both the
> DAC-SATA-MV8 and the AOC-SAT2-MV8, which use the Marvell MV88SX and
> work very well in Solaris (package SUNWmv88sx).

Those are PCI-X SATA cards; the AOC-SASLP-MV8 is a PCIe SAS card and has no (Open)Solaris driver.

--
James Andrewartha
Robin Bowes
2009-Nov-10 15:36 UTC
[zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS
On 01/09/09 08:26, James Andrewartha wrote:
> Those are PCI-X SATA cards; the AOC-SASLP-MV8 is a PCIe SAS card and
> has no (Open)Solaris driver.

Shame; I was just thinking that this was a nice-looking card that could replace my AOC-SAT2-MV8s. Ah well...

R.