Hey guys, please excuse me in advance if I say or ask anything stupid :)

Anyway, Solaris newbie here. I've built myself a new file server to use at home, on which I'm planning to configure SXCE-89 & ZFS. It's a Supermicro C2SBX motherboard with a Core2Duo & 4GB DDR3. I have 6x750GB SATA drives in it connected to the onboard ICH9-R controller (with BIOS RAID disabled & AHCI enabled). I also have a 160GB SATA drive connected to a PCI SIIG SC-SA0012-S1 controller, which will be used as the system drive. My plan is to configure a RAID-Z2 pool on the 6x750GB drives; the system drive is just there for Solaris. I'm also out of ports on the motherboard, hence the add-in PCI SATA controller.

My problem is that Solaris is not recognizing the system drive during the DVD install procedure. It sees the 6x750GB onboard drives fine. I originally used a RocketRAID 1720 SATA controller, which I believe uses HighPoint's own chipset, and it was a no-go. I exchanged that controller for the SIIG SC-SA0012-S1, which I thought used a Silicon Image (SiI) chipset. The install DVD isn't recognizing it either, unfortunately, & now I'm not so sure it uses a SiI chipset. I checked the HCL, and it only lists a few cards reported to work under SXCE.

If anyone has any suggestions on either...
A) Using a different driver during the install procedure, or...
B) A different, cheap SATA controller
I'd appreciate it very much. Sorry for the rambling post, but I wanted to be detailed from the get-go. Thanks for any input! :)

PS. On a side note, I'm interested in playing around with SXCE development. It looks interesting :)

This message posted from opensolaris.org
On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh <peeyush.singh at gmail.com> wrote:
> [original post snipped]

I'm still a fan of the marvell based supermicro card. I run two of them in my fileserver: the AOC-SAT2-MV8.

http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm

It's the same chipset that's in the thumper, and it's pretty cheap for an 8-port card.

--Tim
On Thu, Jun 5, 2008 at 8:16 PM, Tim <tim at tcsac.net> wrote:
> [quoted text snipped]
>
> I'm still a fan of the marvell based supermicro card. I run two of them
> in my fileserver: the AOC-SAT2-MV8.
>
> http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm

I gave treatment to this question a few days ago. Yes, if you want PCI-X, go with the Marvell. If you want PCIe SATA, then it's either a SIIG-produced SiI3124 card or a lot of guessing. I think the real winner is going to be the newer SAS/SATA mixed HBAs from LSI based on the 1068 chipset, which Sun has been supporting well in newer hardware.

http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html

Equally important, don't mix SATA-I and SATA-II on that system motherboard, or on one of those add-on cards.

http://jmlittle.blogspot.com/2008/05/mixing-sata-dos-and-donts.html
On Thu, Jun 5, 2008 at 11:12 PM, Joe Little <jmlittle at gmail.com> wrote:
> [quoted text snipped]
>
> I gave treatment to this question a few days ago. Yes, if you want
> PCI-X, go with the Marvell. If you want PCIe SATA, then it's either a
> SIIG-produced SiI3124 card or a lot of guessing. I think the real
> winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
> the 1068 chipset, which Sun has been supporting well in newer
> hardware.
>
> http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html

PCI or PCI-X. Yes, you might see *SOME* loss in speed from a PCI interface, but let's be honest, there aren't a whole lot of users on this list who have the infrastructure to use greater than 100MB/sec and are asking this sort of question. A PCI bus should have no issues pushing that.

> Equally important, don't mix SATA-I and SATA-II on that system
> motherboard, or on one of those add-on cards.
>
> http://jmlittle.blogspot.com/2008/05/mixing-sata-dos-and-donts.html

I mix SATA-I and SATA-II and haven't had any issues to date. Unless you have an official bug logged/linked, that's as good as an old wives' tale.
I don't presently have any working x86 hardware, nor do I routinely work with x86 hardware configurations. But it's not hard to find previous discussion on the subject:

http://www.opensolaris.org/jive/thread.jspa?messageID=96790

for example...

Also, remember that SAS controllers can usually also talk to SATA drives; they're usually more expensive of course, but sometimes you can find a deal. I have an LSI SAS 3800x, and I paid a heck of a lot less than list for it (eBay); I'm guessing someone bought the bulk package and sold off whatever they didn't need (new board, sealed, but no docs). That was a while ago, and at around US $100 it might still not be what you'd call cheap. If you want < $50, you might have better luck looking at the earlier discussion.

But I suspect to some extent you get what you pay for; the throughput on the higher-end boards may well be a good bit higher, although for one disk (or even two, to mirror the system disk), it might not matter so much.
Buy a 2-port SATA II PCI-E x1 SiI3132 controller ($20). The Solaris driver is very stable.

Or, a solution I would personally prefer: don't use a 7th disk. Partition each of your 6 disks with a small ~7-GB slice at the beginning and the rest of the disk for ZFS. Install the OS in one of the small slices. This will only reduce your usable ZFS storage space by <1% (and you may have to manually enable the write cache, because ZFS won't be given entire disks, only slices), but:

(1) you save a disk, a controller, money, and related hassles (the reason why you post here :P),
(2) you can mirror your OS on the other small slices using SVM or a ZFS mirror to improve reliability, and
(3) this setup allows you to easily experiment with parallel installs of different opensolaris versions in the other slices.

-marc
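A rough sketch of the layout Marc describes. The device names (c1t0d0 through c1t5d0) and slice numbers are hypothetical, and the slices themselves would be created beforehand with format(1M):

```shell
# Hypothetical layout -- slices created beforehand with format(1M):
#   c1t0d0s0 .. c1t5d0s0  small ~7 GB slices (OS, plus spare OS installs)
#   c1t0d0s1 .. c1t5d0s1  large slices for ZFS

# Build the raidz2 pool across the six large slices:
zpool create tank raidz2 \
    c1t0d0s1 c1t1d0s1 c1t2d0s1 c1t3d0s1 c1t4d0s1 c1t5d0s1

# ZFS enables a drive's write cache automatically only when given whole
# disks. With slices you may need to enable it per drive yourself, e.g.
# interactively:  format -e  ->  cache  ->  write_cache  ->  enable
```

The OS in one s0 slice can then be mirrored to another disk's s0 with SVM or a ZFS mirror, as Marc notes.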
Richard L. Hamilton <rlhamil <at> smart.net> writes:
> But I suspect to some extent you get what you pay for; the throughput on the
> higher-end boards may well be a good bit higher.

Not really. Nowadays, even the cheapest controllers, processors & mobos are EASILY capable of handling the platter-speed throughput of up to 8-10 disks.

http://opensolaris.org/jive/thread.jspa?threadID=54481

-marc
> PCI or PCI-X. Yes, you might see *SOME* loss in speed from a PCI
> interface, but let's be honest, there aren't a whole lot of users on
> this list who have the infrastructure to use greater than 100MB/sec
> and are asking this sort of question. A PCI bus should have no issues
> pushing that.

I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives in a RAIDZ. I have the $20 Syba SATA I PCI card with 4 120GB drives in another RAIDZ.

I get 550 MB/s on the 1st and 82 MB/s on the 2nd in the local system. From another, faster system, over Gigabit NFS I get 69MB/s and 35MB/s.

I'm taking a big hit from the SATA I PCI card vs the motherboard SATA II, it seems.
On Fri, Jun 6, 2008 at 3:23 PM, Tom Buskey <tom at buskey.name> wrote:
> [quoted text snipped]
>
> I'm taking a big hit from the SATA I PCI card vs the motherboard SATA II,
> it seems.

That has FAR, FAR more to do with the drives and crappy card than the interface. I have no issues maxing out a gigE link with a marvell card on a PCI bus.

--Tim
On Fri, Jun 6, 2008 at 16:23, Tom Buskey <tom at buskey.name> wrote:
> I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives in a RAIDZ.
...
> I get 550 MB/s

I doubt this number a lot. That's almost 200 (550/(N-1) = 183) MB/s per disk, and drives I've seen are usually more in the neighborhood of 80 MB/s. How did you come up with this number? What benchmark did you run? While it's executing, what does "zpool iostat mypool 10" show?

Will
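As a quick check of Will's arithmetic: a 4-disk raidz stripes data across N-1 = 3 disks, so the claimed aggregate implies an implausible per-disk rate for 2008-era drives:

```shell
# 550 MB/s aggregate over a 4-disk raidz means 3 data disks share the load.
DISKS=4
AGGREGATE=550
PER_DISK=$((AGGREGATE / (DISKS - 1)))
echo "${PER_DISK} MB/s per disk"   # prints "183 MB/s per disk"
```

Roughly 183 MB/s per spindle, against the ~80 MB/s Will cites as typical.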
On Thu, Jun 5, 2008 at 9:26 PM, Tim <tim at tcsac.net> wrote:
> [quoted text snipped]
>
> I mix SATA-I and SATA-II and haven't had any issues to date. Unless you
> have an official bug logged/linked, that's as good as an old wives' tale.

No bug to report, but it was one of the issues with losing my log device a bit ago. ZFS engineers appear to be aware of it. Among other things, it's why there is a known workaround to disable command queueing (NCQ) on the marvell card when SATA-I drives are attached to it.
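For reference, the NCQ workaround Joe mentions was usually applied as an /etc/system tunable in that era's Solaris builds. Treat the tunable name below as a sketch and verify it against your release before relying on it:

```shell
# /etc/system fragment -- limit the SATA framework's queue depth to 1,
# effectively disabling NCQ for attached drives. Requires a reboot.
set sata:sata_max_queue_depth = 0x1
```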
Tim wrote:
> PCI or PCI-X. Yes, you might see *SOME* loss in speed from a PCI
> interface, but let's be honest, there aren't a whole lot of users on
> this list who have the infrastructure to use greater than 100MB/sec
> and are asking this sort of question. A PCI bus should have no issues
> pushing that.

Hm. If it's a system with only 1 PCI bus, there are still a few things to consider here. If it's plain old 33 MHz, 32-bit PCI, your 100MB/s(ish) usable bandwidth is actually total bandwidth. That's 50MB/s in and 50MB/s out if you are copying disk to disk... I am about to update my home server for exactly this issue of saturating my PCI bus. It's even worse for me, as I'm mirroring, so that works out to closer to 33MB/s read, 33MB/s write + 33MB/s write to the mirror. All in all, it blows.

I'm looking into one of the new Gigabyte NVIDIA-based systems with the 750a SLI chipsets. I'm *hoping* the Solaris nv_sata drivers will work with the new chipset (or that we are on the way to updating them...). My other box that's using the Nforce 570 works like a champ, and I'm hoping to recapture that magic. (I actually wanted to buy some more 570-based MBs but cannot get 'em in Australia any more... :)

Cheers!
Nathan.
> On Fri, Jun 6, 2008 at 16:23, Tom Buskey <tom at buskey.name> wrote:
> > I get 550 MB/s
>
> I doubt this number a lot. [...] What benchmark did you run? While it's
> executing, what does "zpool iostat mypool 10" show?

time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x

real    0m13.503s
user    0m0.016s
sys     0m8.981s

This message posted from opensolaris.org
On Mon, Jun 9, 2008 at 7:16 AM, Tom Buskey <tom at buskey.name> wrote:
> [quoted text snipped]
>
> time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x
>
> real    0m13.503s
> user    0m0.016s
> sys     0m8.981s

What's bonnie++ have to say?
Tom Buskey schrieb:
> time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x
>
> real    0m13.503s
> user    0m0.016s
> sys     0m8.981s

Are you sure gdd doesn't create a sparse file?

- Thomas
On 9 Jun 2008, at 14:59, Thomas Maier-Komor wrote:
>> time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x
>>
>> real    0m13.503s
>> user    0m0.016s
>> sys     0m8.981s
>
> Are you sure gdd doesn't create a sparse file?

One would presumably expect it to be instantaneous if it was creating a sparse file. It's not a compressed filesystem though, is it? /dev/zero tends to be fairly compressible ;-)

I think, as someone else pointed out, running zpool iostat at the same time might be the best way to see what's really happening.

Jonathan
> time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x
>
> real    0m13.503s
> user    0m0.016s
> sys     0m8.981s

As someone pointed out, this is a compressed file system :-) I'll have to get a copy of Bonnie++ or some such to get more accurate numbers.
On Mon, 9 Jun 2008, Tom Buskey wrote:
> time gdd if=/dev/zero bs=1048576 count=10240 of=/data/video/x
>
> real    0m13.503s
> user    0m0.016s
> sys     0m8.981s

These results are not quite valid. /dev/zero only produces null bytes, which allows zfs to store the data as sparse files (i.e. write less actual data to disk). The size of the data written should also be at least 2X (preferably a lot more) the size of installed RAM, since otherwise the write may be simply cached to RAM with very little I/O.

I suggest using iozone (http://www.iozone.org/) for doing this sort of testing.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
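Following Bob's two points, a sketch of a less misleading dd run: pre-generate incompressible data, then time writing it to the filesystem under test. The sizes and the /var/tmp paths here are illustrative; in a real test you would write onto the pool (e.g. under /data) and make the total at least twice installed RAM:

```shell
# Pre-generate incompressible input so /dev/urandom's own speed doesn't
# become the bottleneck during the timed run.
dd if=/dev/urandom of=/var/tmp/random.dat bs=1048576 count=16 2>/dev/null

# Timed copy onto the filesystem under test; sync so buffered writes are
# flushed before the clock stops. Scale count well past 2x RAM for real use.
time sh -c 'dd if=/var/tmp/random.dat of=/var/tmp/testfile bs=1048576 2>/dev/null; sync'

rm -f /var/tmp/random.dat /var/tmp/testfile
```

Random data defeats both compression and any zero-block shortcuts, so the elapsed time reflects actual I/O.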
On Mon, 9 Jun 2008, Jonathan Hogg wrote:
>> Are you sure gdd doesn't create a sparse file?
>
> One would presumably expect it to be instantaneous if it was creating
> a sparse file. It's not a compressed filesystem though, is it?
> /dev/zero tends to be fairly compressible ;-)

/dev/zero does not have infinite performance. Dd will perform at least one extra data copy in memory. Since zfs computes checksums, it needs to inspect all of the bytes in what is written. As a result, zfs will easily know if the block is all zeros, and even if the data is all zeros, time will be consumed.

On my system, Solaris dd does not seem to create a sparse file. I don't have GNU dd installed to test with.

Bob
Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> /dev/zero does not have infinite performance. [...]
>
> On my system, Solaris dd does not seem to create a sparse file. I
> don't have GNU dd installed to test with.

I did not read the older messages in this thread, but:

dd skip=n    skips n records on input
dd seek=n    seeks n records on output

Whenever you use "dd ... of=name seek=something" you will have the chance to get a sparse file (depending on the parameters of the underlying filesystem).

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work)  Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
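Jörg's point is easy to demonstrate on any filesystem that supports holes: seeking past end-of-file before writing leaves the skipped range unallocated, so apparent size and allocated size diverge:

```shell
# Write a single byte at an offset just under 100 MB; everything before
# it becomes a hole (unallocated) on filesystems that support sparse files.
dd if=/dev/zero of=/var/tmp/sparse.dat bs=1 count=1 \
   seek=$((100 * 1024 * 1024 - 1)) 2>/dev/null

ls -l /var/tmp/sparse.dat   # apparent size: 104857600 bytes
du -k /var/tmp/sparse.dat   # allocated size: typically just a few KB

rm -f /var/tmp/sparse.dat
```

This is why a dd benchmark that uses seek= (or a tool that detects zero blocks) can report wildly optimistic throughput: most of the "written" data never touches the disk.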
On Thu, Jun 5, 2008 at 9:12 PM, Joe Little <jmlittle at gmail.com> wrote:
> winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
> the 1068 chipset, which Sun has been supporting well in newer
> hardware.
>
> http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html

Joe --

What about the LSISAS3081E-R? Does it use the same drivers as the other LSI controllers?

http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
Brandon High wrote:> On Thu, Jun 5, 2008 at 9:12 PM, Joe Little <jmlittle at gmail.com> wrote: >> winner is going to be the newer SAS/SATA mixed HBAs from LSI based on >> the 1068 chipset, which Sun has been supporting well in newer >> hardware. >> >> http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html > > Joe -- > > What about the LSISAS3081E-R? Does it use the same drivers as the > other LSI controllers? > > http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.htmlThat card is very similar to ones sold by Sun. It should work fine out of the box with the mpt(7D) driver. James C. McPherson -- Senior Kernel Software Engineer, Solaris Sun Microsystems http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
If you're worried about the bandwidth limitations of putting something like the supermicro card in a PCI slot, how about using an active riser card to convert from PCI-E to PCI-X? One of these, or something similar:

http://www.tyan.com/product_accessories_spec.aspx?pid=26

on sale at

http://www.amazon.com/dp/B000OH5J9G?smid=ATVPDKIKX0DER&tag=nextag-ce-tier2-20&linkCode=asn

I'm sure you can find something similar for less, and I have seen ones that go from PCI-E x16 to several PCI-X as well. That and the supermicro are under half the price of the cheapest LSI PCI-E card.

Lee
On Wed, Jun 11, 2008 at 10:18 AM, Lee <lfreyberg at gmail.com> wrote:
> [quoted text snipped]

Are those universal though? I was under the impression it had to be supported by the motherboard, or you'd fry all components involved.
I don't think so, not all of them anyway. They also sell ones that have a proprietary goldfinger, which obviously would not work. The spec does not mention any specific restrictions, just lists the interface types (but it is fairly brief), and you can certainly buy PCI - PCI-E generic adapters:

http://virtuavia.eu/shop/pci-express-to-pci-adapter-p29855.html

which use a similar bridge chip.
On Wed, Jun 11, 2008 at 8:21 AM, Tim <tim at tcsac.net> wrote:
> Are those universal though? I was under the impression it had to be
> supported by the motherboard, or you'd fry all components involved.

There are PCI/PCI-X to PCI-e bridge chips available (as well as PCI-e to AGP) and they're part of the spec. As to how well they actually work on a separate riser card, I'm not sure. I like the idea though.

This board looks decent if you need a ton of drives. The second x16 slot is actually x4 electrical, but that's not too shabby for a $100 mobo.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813128335

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche