Greetings all

I was looking at creating a little ZFS storage box at home using the following SATA controller (Adaptec Serial ATA II RAID 1420SA) on an OpenSolaris x86 build.

Just wanted to know if anyone out there is using these and can vouch for them. If not, is there something else you can recommend or suggest?

Disks would be 6x Seagate 500GB drives.

Thanks

David
I've had great luck with my Supermicro AOC-SAT2-MV8 card so far. I'm using it in an old PCI slot, so it's probably not as fast as it could be, but it worked great right out of the box.

-Aaron

On Fri, May 23, 2008 at 12:09 AM, David Francis <dfrancis at amnet.net.au> wrote:
> Greetings all
>
> I was looking at creating a little ZFS storage box at home using the following
> SATA controller (Adaptec Serial ATA II RAID 1420SA) on an OpenSolaris x86 build
>
> Just wanted to know if anyone out there is using these and can vouch for them.
> If not, is there something else you can recommend or suggest?
>
> Disks would be 6x Seagate 500GB drives.
>
> Thanks
>
> David
David Francis wrote:
> Greetings all
>
> I was looking at creating a little ZFS storage box at home using the following
> SATA controller (Adaptec Serial ATA II RAID 1420SA) on an OpenSolaris x86 build
>
> Just wanted to know if anyone out there is using these and can vouch for them.
> If not, is there something else you can recommend or suggest?
>
> Disks would be 6x Seagate 500GB drives.

6 or more SATA slots are quite common on current motherboards, so if you shop around, you may not need an add-on card.

Ian
That 1420SA will not work, period. Type "1420sa solaris" into Google and you'll find a thread about the problems I had with it.

I sold it and took the cheap route again with a Silicon Image 3124-based adapter and had more problems, which now would probably be solved with the latest Solaris updates.

Anyway, I finally settled for a motherboard with an Intel ICH9-R and couldn't be happier (Intel DG33TL/DG33TLM, 6 SATA ports). No hassles and very speedy.

That Supermicro card someone else is recommending should also work without any issues, and it's really cheap for what you get (8 ports). Your maximum throughput won't exceed 100MB/s though if you can't plug it into a PCI-X slot and resort to a regular PCI slot instead.

Greetings,

Pascal
On Fri, May 23, 2008 at 12:47:18AM -0700, Pascal Vandeputte wrote:
> I sold it and took the cheap route again with a Silicon Image 3124-based
> adapter and had more problems, which now would probably be solved with the
> latest Solaris updates.

I'm running a 3124 with snv_81 and haven't had a single problem with it. Whatever problems you ran into have likely been resolved.

Just my $0.02. ;)

-brian

--
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
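If you want to verify which driver actually bound to a card on your own build, a quick sketch (the grep pattern below is just an example for the si3124 case):

    # show the device tree with the driver bound to each node
    prtconf -D | grep -i si3124

    # once the HBA driver attaches, list SATA attachment points and occupants
    cfgadm -al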
Brian Hechinger wrote:
> I'm running a 3124 with snv_81 and haven't had a single problem with it.
> Whatever problems you ran into have likely been resolved.
>
> Just my $0.02. ;)
>
> -brian

The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only. It's dirt cheap (under $25), and you will probably need to re-flash the BIOS with one from Silicon Image's web site to remove the RAID software (Solaris doesn't understand it), but I've had nothing but success with this card (the re-flash is simple).

On a related note - does anyone know of a good Solaris-supported 4+ port SATA card for PCI-Express? Preferably 1x or 4x slots...

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
> > I'm running a 3124 with snv_81 and haven't had a single problem with it.
> > Whatever problems you ran into have likely been resolved.
>
> The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only.
> It's dirt cheap (under $25), and you will probably need to re-flash the
> BIOS with one from Silicon Image's web site to remove the RAID software
> (Solaris doesn't understand it), but I've had nothing but success with
> this card (the re-flash is simple).

With the 3124 you don't even need to play the flash game; the 3124 is completely supported.

> On a related note - does anyone know of a good Solaris-supported 4+ port
> SATA card for PCI-Express? Preferably 1x or 4x slots...

The Silicon Image 3134 is supported by Solaris.

-brian

--
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
On Fri, May 23, 2008 at 2:36 PM, Brian Hechinger <wonko at 4amlunch.net> wrote:
> With the 3124 you don't even need to play the flash game; the 3124 is
> completely supported.
>
> > On a related note - does anyone know of a good Solaris-supported 4+ port
> > SATA card for PCI-Express? Preferably 1x or 4x slots...
>
> The Silicon Image 3134 is supported by Solaris.

I'm looking on their site and don't even see any data on the 3134... is this *something new* that hasn't been released, or? The only thing I see is the 3132.
On Fri, May 23, 2008 at 12:43 PM, Tim <tim at tcsac.net> wrote:
> I'm looking on their site and don't even see any data on the 3134... is this
> *something new* that hasn't been released, or? The only thing I see is the 3132.

There isn't a 3134, but there is a 3124, which is a PCI-X based 4-port.

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
On Fri, May 23, 2008 at 3:15 PM, Brandon High <bhigh at freaks.com> wrote:
> There isn't a 3134, but there is a 3124, which is a PCI-X based 4-port.

So we're still stuck in the same place we were a year ago: no high-port-count PCI-E compatible non-RAID SATA cards. You'd think with all the demand SOMEONE would've stepped up to the plate by now. Marvell, c'mon ;)

--Tim
Tim <tim <at> tcsac.net> writes:
> So we're still stuck in the same place we were a year ago. No high-port-count
> PCI-E compatible non-RAID SATA cards. You'd think with all the demand SOMEONE
> would've stepped up to the plate by now. Marvell, c'mon ;)

Here is a 6-port SATA PCI-Express x1 controller for $70: [1]. I don't know who makes this card, but from the picture it is apparently based on a SiI3114 chip behind a PCI-E to PCI bridge. I also don't know how they get 6 ports total when this chip is known to only provide 4 ports. Downsides: SATA 1.5 Gbps only; 4 of the ports are external (eSATA cables required); and don't expect to break throughput records, because the bottleneck will be the internal PCI bus (33 MHz or 66 MHz: 133 or 266 MB/s theoretical, hence 100 or 200 MB/s practical peak throughput shared between the 6 drives).

I also know of Lycom, who sell a 4-port PCI-E x8 card based on the Silicon Image SiI3124 chip and a PCI-E to PCI-X bridge [2]. I am unable to find a vendor for this card though. I heard about Lycom through the vendor list on sata-io.org.

Regarding Marvell, their website is completely useless as they provide almost no tech info regarding their SATA products, but according to a Wikipedia article [3] they have three PCI-E to SATA 3.0 Gbps host controllers:

o 88SE6141: 4-port (AHCI ?)
o 88SE6145: 4-port (AHCI according to the Linux driver source code)
o 88SX7042: 4-port (non-AHCI)

The 6141 and 6145 appear to be mostly used as onboard SATA controllers according to [3]. The 7042 can be found on some Adaptec and Highpoint cards according to [4], but they are probably expensive and come with this thing called "hardware RAID" that most of us don't need :)

Overall, like you I am frustrated by the lack of inexpensive non-RAID native PCI-E SATA controllers.

-marc

[1] http://cooldrives.com/ss42chesrapc.html
[2] http://www.lycom.com.tw/PE124R5.htm
[3] http://en.wikipedia.org/wiki/List_of_Marvell_Technology_Group_chipsets
[4] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=drivers/ata/sata_mv.c;hb=HEAD
Marc Bevand wrote:
> Overall, like you I am frustrated by the lack of inexpensive non-RAID native
> PCI-E SATA controllers.

Why non-RAID? Is it cost?

Personally I'm interested in a high-port-count RAID card with as much battery-backed cache RAM as possible, one that can export as many LUNs as it can handle drives. I want a card like that so that I can give ZFS as many single-drive RAID 0 LUNs with battery-backed write caches as possible. I know it won't be cheap, but it should perform really well.

In my current machines (IBM x346's), I'm stuck with U320 SCSI internal for now, but I have the 256MB internal battery-backed '7k' RAID card, and I've made 5 one-disk RAID0 LUNs to get the benefits of the write cache on the card but still let ZFS have the benefits of a many-disk JBOD.

I'm on the lookout for a SATA RAID card with 1-4GB of battery-backed cache to redo this config with 10-24 SATA drives. I'll probably end up with multiple cards, since 1GB caches seem to be the largest I've found.

-Kyle
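From the ZFS side, a layout like Kyle's is plain: once the controller exports each disk as its own single-drive RAID0 LUN, ZFS just sees ordinary devices. A minimal sketch; the c#t#d# names are hypothetical and will differ per box:

    # each device below is a single-disk RAID0 volume exported by the controller
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # confirm the layout
    zpool status tank

ZFS then provides the redundancy across LUNs while the card's battery-backed cache absorbs the writes.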
Kyle McDonald <KMcDonald <at> Egenera.COM> writes:
> Marc Bevand wrote:
> > Overall, like you I am frustrated by the lack of inexpensive non-RAID
> > native PCI-E SATA controllers.
>
> Why non-RAID? Is it cost?

Primarily cost, reliability (less complex hw = less hw that can fail), and serviceability (no need to rebuy the exact same RAID card model when it fails; any SATA controller will do).

If you want good write performance, instead of adding N GB of cache memory to a disk controller, add N*5 or N*10 GB of system memory (DDR2 is maybe 1/5th or 1/10th the price per GB, and the OS already uses main memory to cache disk writes).

-marc
On Sun, 25 May 2008, Marc Bevand wrote:
> Primarily cost, reliability (less complex hw = less hw that can fail),
> and serviceability (no need to rebuy the exact same RAID card model
> when it fails; any SATA controller will do).

As long as the RAID is self-contained on the card, and the disks are exported as JBOD, then you should be able to replace the card with any adaptor supporting at least as many ports.

> If you want good write performance, instead of adding N GB of cache memory
> to a disk controller, add N*5 or N*10 GB of system memory (DDR2 is maybe
> 1/5th or 1/10th the price per GB, and the OS already uses main memory to
> cache disk writes).

Something tells me that Kyle may know what he is talking about. More system RAM does not help synchronous writes go much faster. It does help with asynchronous writes, but only for intermittent or relatively slow write loads. What makes synchronous writes go faster is for the data to be queued as fast as possible to non-volatile media (e.g. NV write cache) so that the ZFS write operation can return right away. ZFS loads up the drives according to the current amount of I/O wait for the device. If the device accepts data faster, then ZFS returns faster, and the client application (e.g. NFS or database) can run again.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
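Bob's point is also exactly what ZFS separate intent log (slog) devices address: put the ZIL on a fast non-volatile device and synchronous writes return as soon as they land there. A minimal sketch, assuming a hypothetical fast device c3t0d0:

    # attach a dedicated intent log device to an existing pool
    zpool add tank log c3t0d0

    # the device should now appear under a "logs" section
    zpool status tank

Separate log devices require ZFS pool version 7 or later (Nevada builds from around snv_68); "zpool upgrade -v" lists what your build supports.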
> More system RAM does not help synchronous writes go much faster.

Agreed, but it does make sure all the asynchronous writes are batched and the txg isn't committed early (which would make everything effectively synchronous). The default batch is every 5 sec.

> If you want good write performance, instead of adding N GB of cache memory
> to a disk controller, add N*5 or N*10 GB of system memory

It kinda comes down to the number of spindles in the end: even if you have a huge log device, or a huge system ZIL, it needs to get to the disks (synchronously) in the end. The rules don't change with ZFS; the system with the most vdevs wins :-)

Rob
Marc Bevand wrote:
> Kyle McDonald <KMcDonald <at> Egenera.COM> writes:
>> Why non-RAID? Is it cost?
>
> Primarily cost, reliability (less complex hw = less hw that can fail),
> and serviceability (no need to rebuy the exact same RAID card model
> when it fails; any SATA controller will do).
>
> If you want good write performance, instead of adding N GB of cache memory
> to a disk controller, add N*5 or N*10 GB of system memory (DDR2 is maybe
> 1/5th or 1/10th the price per GB, and the OS already uses main memory to
> cache disk writes).

I've already maxed the machines out with 16GB. The RAID cache seemed the next step, and it's still cheaper than an SSD ZIL device, though I think that would be the step after that.

Since NFS is the primary way I intend to use this, the battery-backed RAM allows the sync requests to return much sooner than straight JBOD would. At least that's my understanding.

-Kyle
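Kyle's understanding is right: NFS COMMIT operations force synchronous semantics, which is exactly where NV cache or a slog pays off. For reference, the NFS side of such a setup is a couple of commands once the pool exists (a sketch; the names are hypothetical):

    # create a filesystem and share it over NFS read-write
    zfs create tank/export
    zfs set sharenfs=rw tank/export

Every NFS commit then goes through the ZIL, so whatever non-volatile device backs the ZIL sets the latency the clients see.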
Which of these SATA controllers have people been able to use with SMART and ZFS boot in Solaris?

Cheers,
11011011
| > Primarily cost, reliability (less complex hw = less hw that can
| > fail), and serviceability (no need to rebuy the exact same RAID card
| > model when it fails; any SATA controller will do).
|
| As long as the RAID is self-contained on the card, and the disks are
| exported as JBOD, then you should be able to replace the card with any
| adaptor supporting at least as many ports.

I believe it's common for PC-level hardware RAID cards to save the RAID configuration on the disks themselves, which takes a bit of space and (if it's done at the start of the disk) may make the disk unrecognizable by standard tools, even with a JBOD setting.

The vendors presumably do this, among other reasons, so that replacing a dead controller doesn't require your operating system and so on to be running in order to upload a saved configuration or the like.

- cks
Kyle McDonald <KMcDonald <at> Egenera.COM> writes:
> I've already maxed the machines out with 16GB. The RAID cache seemed the
> next step, and it's still cheaper than an SSD ZIL device, though I think
> that would be the step after that.
>
> Since NFS is the primary way I intend to use this, the battery-backed
> RAM allows the sync requests to return much sooner than straight JBOD
> would. At least that's my understanding.

Yes, the cache RAM will help synchronous writes (as long as it never fills up). I am biased against cache RAM because none of my workloads depend on short latency of synchronous writes.

-marc
I'm using a Gigabyte i-RAM card with cheap memory for my slog device, with great results. Of course I don't have as much memory as you do in my project box. I also want to use the leftover space on the i-RAM and dual-purpose it as a readzilla cache device and slog. Picked it up off eBay along with some computer guts and an NSC-314S 3U 14 hot-swap drive rackmount case.

Like everyone else, I have been spending hours trying to find a supported high-capacity SATA card that supports PCI-Express. I wish someone would make a driver for that Adaptec card mentioned in this thread; it is very reasonably priced for a project box. Everything else that fits that category and is supported seems to be $320 or much more.
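Dual-purposing one device that way means slicing it and feeding each slice to ZFS separately. A rough sketch, assuming the i-RAM shows up as a hypothetical c4t0d0 that has been partitioned with format(1M) into slices s0 and s1:

    # slice 0 as the intent log, slice 1 as an L2ARC (readzilla) cache device
    zpool add tank log /dev/dsk/c4t0d0s0
    zpool add tank cache /dev/dsk/c4t0d0s1

Note that cache devices only arrived around snv_78, so this needs a reasonably recent Nevada build.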
Erik Trimble wrote:
> On a related note - does anyone know of a good Solaris-supported 4+ port
> SATA card for PCI-Express? Preferably 1x or 4x slots...

From what I can tell, all the vendors are only making SAS controllers for PCIe with more than 4 ports. Since SAS supports SATA, I guess they don't see much point in doing SATA-only controllers.

For example, the LSI SAS3081E-R is $260 for 8 SAS ports on 8x PCIe, which is somewhat more expensive than the almost equivalent PCI-X LSI SAS3080X-R, which is as low as $180.

For those downthread looking for full RAID controllers with battery-backed RAM, Areca (who formerly specialised in SATA controllers) now do SAS RAID at reasonable prices, and have Solaris drivers.

--
James Andrewartha
On May 28, 2008, at 05:11, James Andrewartha wrote:
> From what I can tell, all the vendors are only making SAS controllers for
> PCIe with more than 4 ports. Since SAS supports SATA, I guess they don't
> see much point in doing SATA-only controllers.
>
> For example, the LSI SAS3081E-R is $260 for 8 SAS ports on 8x PCIe, which
> is somewhat more expensive than the almost equivalent PCI-X LSI SAS3080X-R,
> which is as low as $180.

That's not a huge price difference when building a server - thanks for the pointer. Are there any 'gotchas' the list can offer when using a SAS card with SATA drives? I've been told that SATA drives can have a lower MTBF than SAS drives (by a guy working QA for BigDriveCo), but ZFS helps keep the I in RAID.

> For those downthread looking for full RAID controllers with battery-backed
> RAM, Areca (who formerly specialised in SATA controllers) now do SAS RAID
> at reasonable prices, and have Solaris drivers.

I've seen posts about misery with the sil and marvell drivers from about a year ago; is there a good way to pound an OpenSolaris driver to find its holes, in a ZFS context? On one hand I'd guess it shouldn't be too hard to simulate different kinds of loads, but on the other hand, if that were easy, the drivers' authors would have done that before unleashing buggy code on the masses.

Thanks,
-Bill

-----
Bill McGonigle, Owner           Work: 603.448.4440
BFC Computing, LLC              Home: 603.448.1668
bill at bfccomputing.com        Cell: 603.252.2606
http://www.bfccomputing.com/    Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf
Bill McGonigle wrote:
> That's not a huge price difference when building a server - thanks
> for the pointer. Are there any 'gotchas' the list can offer when
> using a SAS card with SATA drives? I've been told that SATA drives
> can have a lower MTBF than SAS drives (by a guy working QA for
> BigDriveCo), but ZFS helps keep the I in RAID.

There are BigDriveCos which sell enterprise-class SATA drives. Since the mechanics are the same, the difference is in the electronics and software. Vote with your pocketbook for the enterprise-class products.

-- richard
On May 28, 2008, at 10:27 AM, Richard Elling wrote:
> Since the mechanics are the same, the difference is in the electronics

In my very distant past, I did QA work for an electronic component manufacturer. Even parts which were "identical" were expected to behave quite differently, based on population statistics. That is, the HighRel MilSpec parts were from batches with no failures (even under very harsh conditions beyond the normal operating mode, and all tests to destruction showed only the expected failure modes), and the "hobbyist grade" components were those whose cohort *failed* all the testing (and destructive testing could highlight abnormal failure modes).

I don't know that drive builders do the same thing, but I'd kinda expect it.

--
Keith H. Bierman   khbkhb at gmail.com | AIM kbiermank
5430 Nassau Circle East | Cherry Hills Village, CO 80113 | 303-997-2749
<speaking for myself*> Copyright 2008
On Wed, 2008-05-28 at 10:34 -0600, Keith Bierman wrote:
> I don't know that drive builders do the same thing, but I'd kinda
> expect it.

Seagate's ES.2 has a higher MTBF than the equivalent consumer drive, so you're probably right. Western Digital's RE2 series (which my work uses) comes with a 5-year warranty, compared to 3 years for the consumer versions. The RE2 also have firmware with Time-Limited Error Recovery, which reports errors promptly, letting the higher-level RAID do data recovery. Both have improved vibration tolerance through firmware tweaks. And if you want 10krpm, I think WD's VelociRaptor counts.

http://www.techreport.com/articles.x/13732
http://www.techreport.com/articles.x/13253
http://www.techreport.com/articles.x/14583

http://www.storagereview.com/ is promising some SSD benchmarks soon.

James Andrewartha
On Wed, May 28, 2008 at 9:27 AM, Richard Elling <Richard.Elling at sun.com> wrote:
> There are BigDriveCos which sell enterprise-class SATA drives.
> Since the mechanics are the same, the difference is in the electronics
> and software. Vote with your pocketbook for the enterprise-class
> products.

CMU released a study comparing the MTBF of enterprise-class drives with consumer drives, and found no real differences. From the study:

"In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks. This may indicate that disk-independent factors, such as operating conditions, usage and environmental factors affect replacement rates more than component specific factors."

Google has also released a similar study on drive reliability. Google's sample size is considerably larger than CMU's as well.

There's a blurb here: http://news.bbc.co.uk/2/hi/technology/6376021.stm
Full results here: http://research.google.com/archive/disk_failures.pdf

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
On Wed, 28 May 2008, Brandon High wrote:
> CMU released a study comparing the MTBF of enterprise-class drives with
> consumer drives, and found no real differences.

That should really not be a surprise. Chips are chips, and with the economies of scale, as few chips will be used as possible. The quality of manufacture could vary, but this is likely more dependent on the manufacturer than the product line. Manufacturers who produce crummy products don't last very long.

True enterprise drives (SCSI, SAS, FC) have media read error rates lower by a factor of 10, and more tolerance to vibration and temperature. They also have much lower storage capacity and much better seek and I/O performance. Failure to read a block is not a failure of the drive, so this won't be considered by any study which only counts drive replacements.

SATA "enterprise" drives seem more like a gimmick than anything else. Perhaps the warranty is longer and they include a tiny bit more smarts in the firmware.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
http://blogs.sun.com/relling/entry/adaptec_webinar_on_disks_and
-- richard

Bob Friesenhahn wrote:
> True enterprise drives (SCSI, SAS, FC) have media read error rates lower
> by a factor of 10, and more tolerance to vibration and temperature. They
> also have much lower storage capacity and much better seek and I/O
> performance. Failure to read a block is not a failure of the drive, so
> this won't be considered by any study which only counts drive replacements.
>
> SATA "enterprise" drives seem more like a gimmick than anything else.
> Perhaps the warranty is longer and they include a tiny bit more smarts
> in the firmware.
On Wed, May 28, 2008 at 12:01:36PM -0400, Bill McGonigle wrote:> On May 28, 2008, at 05:11, James Andrewartha wrote: > > That''s not a huge price difference when building a server - thanks > for the pointer. Are there any ''gotchas'' the list can offer when > using a SAS card with SATA drives? I''ve been told that SATA drives > can have a lower MTBF than SAS drives (by a guy working QA for > BigDriveCo), but ZFS helps keep the I in RAID.I''m running 3 (used to be 4, but I repurposed that drive) 500GB Seagate SATA disks on an LSI SAS3080X in a RAIDZ1 pool in my Ultra80 and it''s been working great. The only ''gothca'' that I can think of is the loss of the ability to run more than one drive per channel, but I guess I can live with that. :) I got my SAS3080X for, uhm, let''s see, including shipping and the SAS to 4 cable SATA breakout cable, it was less than $100 off of ebay, probably closer to $80. I don''t know prices on the PCIe version of those cards on ebay though. Probably more expensive as everyone wants PCIe these days. -brian -- "Coding in C is like sending a 3 year old to do groceries. You gotta tell them exactly what you want or you''ll end up with a cupboard full of pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
I've had a RAIDZ/ZFS file server since Update 2, so I thought I'd share my setup:

Opteron FX-51 (2.3GHz, Socket 940)
Asus SK8N
4x 512MB ECC Unbuffered DDR1 Memory
2x Skymaster PCI-X 4-Port SATA (based on the SiI3114 chipset), currently deployed in 2x PCI ports on the motherboard
1x Intel 10/100 NIC, PCI
8x 320GB Western Digital SATA Drives

As you can see, I'm sharing the PCI bus between both controllers and my NIC, so the speed isn't very fast (10MB/s) from a Windows XP client through Samba.

I'm considering changing the motherboard to an Asus K8N-LR, which will allow me to use both PCI-X slots, plus a dedicated Intel Gigabit NIC in the PCIe slot. That will dramatically speed things up.
I'm using the AOC card with 8 SATA-2 ports too. It got detected automatically during the Solaris install. Works great. And it is cheap. I've heard that it uses the same chipset as the X4500 Thumper with 48 drives?

In a PCI slot, the PCI bus bottlenecks at ~150MB/sec or so. In a PCI-X slot, you will reach something like 1.5GB/sec, which should suffice for most needs. Maybe it is cheaper to buy that card + a PCI-X motherboard (only found on server mobos) than buying SAS or PCI-Express, if you want to achieve speed?
On Fri, May 30, 2008 at 12:48 PM, Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> wrote:
> In a PCI-X slot, you will reach something like 1.5GB/sec, which should
> suffice for most needs. Maybe it is cheaper to buy that card + a PCI-X
> motherboard (only found on server mobos) than buying SAS or PCI-Express,
> if you want to achieve speed?

That's my thought as well. I'm going to be putting together a home NAS based on OpenSolaris using the following:

1 SUPERMICRO CSE-743T-645B Black Chassis
1 ASUS M2N-LR AM2 NVIDIA nForce Professional 3600 ATX Server Motherboard
1 SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X 133MHz SATA Controller Card
1 AMD Athlon X2 4850e 2.5GHz Socket AM2 45W Dual-Core Processor Model ADH4850DOBOX
1 Crucial 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 667 (PC2 5300) ECC Unbuffered Dual Channel Kit Server Memory Model CT2KIT25672AA667
8 Western Digital Caviar GP WD10EACS 1TB 5400 to 7200 RPM SATA 3.0Gb/s Hard Drives

Subtotal: $2,386.88

I may get another drive for the OS as well, or boot off of a CF-card/IDE adapter like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16812186038

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
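With 8 identical drives on the MV8, the pool creation itself is a one-liner. A minimal sketch; the device names are hypothetical and raidz2 is just one reasonable choice for 8 disks:

    # 8-disk raidz2: ~6TB usable, survives any two simultaneous drive failures
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

raidz2 needs pool version 3 or later (Solaris 10 11/06 or any recent Nevada build), so any current install has it.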
Brandon High wrote:
> That's my thought as well. I'm going to be putting together a home NAS
> based on OpenSolaris using the following:
> [...]
> Subtotal: $2,386.88

One thought on this: for a small server, which is unlikely to ever be CPU-bound, I would suggest looking for an older dual-Socket 940 Opteron motherboard. They almost all have many PCI-X slots, and single-core Opterons are dirt cheap. PC3200 DDR1 ECC RAM is also cheap.

Tyan/Supermicro dual-Socket 940 motherboard: < $200
Opteron 252 (2.4GHz) + heatsink: < $75
Opteron 280 (dual-core 2.4GHz) + heatsink: < $180
4x 1GB ECC DDR1 PC3200 RAM: < $150

The only drawback of the older Socket 940 Opterons is that they don't support the hardware VT extensions, so running a Windows guest under xVM on them isn't currently possible.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
On Fri, May 30, 2008 at 5:59 PM, Erik Trimble <Erik.Trimble at sun.com> wrote:
> One thought on this: for a small server, which is unlikely to ever be
> CPU-bound, I would suggest looking for an older dual-Socket 940 Opteron
> motherboard. They almost all have many PCI-X slots, and single-core
> Opterons are dirt cheap. PC3200 DDR1 ECC RAM is also cheap.
>
> Tyan/Supermicro dual-Socket 940 motherboard: < $200
> Opteron 252 (2.4GHz) + heatsink: < $75
> Opteron 280 (dual-core 2.4GHz) + heatsink: < $180
> 4x 1GB ECC DDR1 PC3200 RAM: < $150

I've actually done the math, and while your statement may have been true a few months ago, it doesn't appear to be the case anymore. Right now Newegg shows the Opteron 285 for $389, or the 880 for $694.

The cheapest ECC DDR RAM is $40 per GB.

The least expensive Socket 940 board with a PCI-X slot is the TYAN S2881UG2NR at $419.

Call it $960 (with a single 285 CPU) vs. $399 for the AM2 pieces.

I'd check prices on a single-socket 939 Opteron with a suitable motherboard, but neither appears to be available anymore.

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
USED hardware is your friend :) He wasn't quoting new prices.

On Fri, May 30, 2008 at 8:54 PM, Brandon High <bhigh at freaks.com> wrote:
> I've actually done the math, and while your statement may have been
> true a few months ago, it doesn't appear to be the case anymore. Right
> now Newegg shows the Opteron 285 for $389, or the 880 for $694.
>
> The cheapest ECC DDR RAM is $40 per GB.
>
> The least expensive Socket 940 board with a PCI-X slot is the TYAN
> S2881UG2NR at $419.
>
> Call it $960 (with a single 285 CPU) vs. $399 for the AM2 pieces.
>
> I'd check prices on a single-socket 939 Opteron with a suitable
> motherboard, but neither appears to be available anymore.
On Fri, May 30, 2008 at 6:57 PM, Tim <tim at tcsac.net> wrote:
> USED hardware is your friend :) He wasn't quoting new prices.

Not really an apples-to-apples comparison then, is it? Cruising eBay for parts isn't my idea of reproducible or supportable.

Sure, an older server could possibly fall into my car's trunk as I leave work one day, but that's not something I'd consider either.

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
Brandon High wrote:
> I've actually done the math, and while your statement may have been
> true a few months ago, it doesn't appear to be the case anymore. Right
> now Newegg shows the Opteron 285 for $389, or the 880 for $694.
>
> The cheapest ECC DDR RAM is $40 per GB.
>
> The least expensive Socket 940 board with a PCI-X slot is the TYAN
> S2881UG2NR at $419.
>
> Call it $960 (with a single 285 CPU) vs. $399 for the AM2 pieces.

Prices I quoted are for used, or excess stock from eBay. There's still a lot of "reman" stuff out there which is new, and very cheap. http://www.pricewatch.com/ is also a good place to look for not-quite-retail-packaged.

A quick eBay jaunt yields (buy-it-now prices including shipping):

$70 Opteron 270 (2.0GHz) - currently the sweet spot for dual-core Opterons
$20 Opteron 252 (2.6GHz) - the sweet spot for single-core Opterons
$30 Opteron heatsink (you can probably do better at a local computer store)
$160 4-pack 1GB Viking-branded PC3200 ECC DDR1 DIMMs
$200 Tyan Thunder K8WE motherboard
$120 Supermicro H8DAE-B motherboard

As an aside, there's someone selling a Tyan Thunder K8S with two Opteron 246s (1.8GHz) and heatsinks, all for $150.

Also, getting an Opteron motherboard tends to get you better extras, like more gigabit ethernet ports, built-in VGA, and either SCSI or lots of SATA ports. Not to mention usually at least twice as many PCI-X slots.

But your mileage may vary. It was just a suggestion.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Brandon High wrote:
> Not really an apples-to-apples comparison then, is it? Cruising eBay
> for parts isn't my idea of reproducible or supportable.
>
> Sure, an older server could possibly fall into my car's trunk as I
> leave work one day, but that's not something I'd consider either.

That said, much of what is available on eBay and from computer "recyclers" really isn't USED. It's surplus inventory, overstock, etc. Much of it is still in the OEM or retail box.

Heck, I know a bunch of folks selling BRAND_NEW IBM e326m machines, complete with 1-year IBM factory warranty, still in the box, for $400 (1 Opteron 280 / 1GB RAM / 1U rackmount chassis).

It takes a bit of hunting, but you'll find that rummaging around the Internet can still get you new parts for obsolete machines, at cut-rate pricing.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Brandon High <bhigh <at> freaks.com> writes:
> I'm going to be putting together a home NAS based on OpenSolaris using
> the following:
> [...]
> Subtotal: $2,386.88

You could get a config that is $200 cheaper, more power-efficient, and more performant by buying a desktop-class mobo with a high SATA port count instead of a server one + AOC-SAT2-MV8. For example, the Abit AB9 Pro (about $80-90) comes with 10 SATA ports (9 internal + 1 external): 6 from the ICH8R chipset (driver: ahci), 2 from a JMB363 chip (driver: ahci in snv_82 and above, see bug 6645543), and 2 from a SiI3132 chip (driver: si3124). All these drivers should be rock-solid. Performance-wise, you should be able to max out your 8 disks' read/write throughput at the same time (but see http://opensolaris.org/jive/thread.jspa?threadID=54481 - there is usually a bottleneck of 150 MB/s per PCI-E lane, which applies to the JMB363 and SiI3132).

Downsides: loss of upgradability by having onboard SATA controllers. No onboard video. And it's an Intel mobo; Intel's prices for low-power processors (below ~50W) are higher than AMD's, especially for dual-core ones. But something only slightly more power-hungry than your 45W AMD is the Pentium E2220 (2.4GHz dual-core, 65W). Most likely your NAS will spend 90+% of its time idle, so there wouldn't be a constant 20W power difference between the 2 configs.

What I hate about mobos with no onboard video is that these days it is impossible to find cheap fanless video cards. So usually I just go headless.

-marc
Marc Bevand <m.bevand <at> gmail.com> writes:
> What I hate about mobos with no onboard video is that these days it is
> impossible to find cheap fanless video cards. So usually I just go headless.

Didn't finish my sentence: ...fanless and *power-efficient*. Most cards consume 20+W when idle. That alone is a half or a third of the idle power consumption of a small NAS.

-marc
Timely discussion. I too am trying to build a stable yet inexpensive storage server for my home lab, mostly for playing in the VM world as well as general data storage. I've considered several options, ranging from simple Linux-based NAS appliances to older EMC SANs. I finally decided to build an NFS/CIFS/iSCSI/(even FC target?) box going the OpenSolaris route with ZFS. What I'm trying to decide on is the appropriate hardware to build the storage server. I have:

- A couple of Dell Pentium 4 boxes
- A couple of old UltraSPARC machines (Ultra 80 and Ultra 10)
- A D1000 array (but alas with old 36GB drives)

Other options are that I build a whitebox, or buy a new PowerEdge or Sun X2200 etc. with some kind of DAS such as a Dell MD1000 (?) and use this box as the one and only system (i.e. storage for PCs and my VM host). Of course this will be an expensive option.

Any recommendations on a decent setup for my purposes, as well as a good SATA DAS? I haven't built a PC for at least 4 years, so I'm not up to date on the processors, mobos, controller cards, etc.

PS. Question for the gentleman who bought the external SATA disk array: how are you planning to connect it to the server?
SS wrote:
> Timely discussion. I too am trying to build a stable yet inexpensive storage
> server for my home lab, mostly for playing in the VM world as well as general
> data storage. [...] I have:
>
> - A couple of Dell Pentium 4 boxes
> - A couple of old UltraSPARC machines (Ultra 80 and Ultra 10)
> - A D1000 array (but alas with old 36GB drives)

Well, a couple of things:

(1) We need to know more about your expected performance and use requirements before making a real recommendation.

(2) You want a 64-bit CPU. So that probably rules out your P4 machines, unless they were extremely late-model P4s with the EM64T features. Given that file-serving alone is relatively low-CPU, you can get away with practically any 64-bit capable CPU made in the last 4 years.

(3) As much as I love them, the Ultra 80 and Ultra 10 are boat anchors now. Way too slow, way too power-hungry, and not really useful.

(4) High capacity on a budget means SATA drives. You can use small SCSI drives for certain performance-sensitive applications and not get creamed in the pocketbook (the D1000 is kinda interesting for this), but you need some form of SATA to get the big GB/$ benefits.

(5) External cases/enclosures are expensive, but nice. The bang-for-buck is in the "workgroup server" case, which (besides being a PC case) generally holds 8-10 drives for about $300 or so.

There are lots of not-quite-optimal-but-still-really-good solutions out there on the used/recycled market, so if you don't need something perfect (or a warranty), the price is really nice. If the solution you really want is an external disk enclosure hooked to some sort of driver/head machine, check out used/off-lease IBM or HP Opteron workstations, which tend to go for $500 or so, loaded. Sun v20z and IBM e326m 1U rackmount servers are in the same price range.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Bob said:
> SATA "enterprise" drives seem more like a gimmick than anything else.
> Perhaps the warranty is longer and they include a tiny bit more smarts
> in the firmware.

WD supply enterprise-class SATA drives whose prevailing feature is a low TLER (the RE series). This makes the drive report a failed block quickly, rather than trying to recover it for minutes. In a consumer PC, the drive's heroic attempts can save the day. In a RAID setup, however, this means that a drive will lock a RAIDed fs until it times out. I would rather have the block reported as failed and let the RAID take care of recovering the data.

Even if there were no further technical reasons, this feature alone is a great benefit for using these SATA drives in the enterprise.

justin
On Sat, May 31, 2008 at 7:06 AM, Justin Vassallo <justin.vassallo at entropay.com> wrote:
> WD supply enterprise-class SATA drives whose prevailing feature is a low
> TLER (the RE series). This makes the drive report a failed block quickly,
> rather than trying to recover it for minutes.

The same feature can be enabled on WD's consumer SATA drives. Google for wdtler.zip.

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
On Fri, May 30, 2008 at 10:58 PM, Marc Bevand <m.bevand at gmail.com> wrote:
> For example, the Abit AB9 Pro (about $80-90) comes with 10 SATA ports (9
> internal + 1 external): 6 from the ICH8R chipset (driver: ahci), 2 from a
> JMB363 chip (driver: ahci in snv_82 and above, see bug 6645543), and 2 from
> a SiI3132 chip (driver: si3124).

I had hoped to get a system with onboard ports, but hadn't found one with more than 6. Thanks for the pointer!

-B

--
Brandon High bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
Brandon High wrote:
> I had hoped to get a system with onboard ports, but hadn't found one
> with more than 6. Thanks for the pointer!

Look for motherboards using the NVIDIA 680[ai].
http://en.wikipedia.org/wiki/NForce_600

-- richard
On May 30, 2008, at 6:59 PM, Erik Trimble wrote:
> The only drawback of the older Socket 940 Opterons is that they don't
> support the hardware VT extensions, so running a Windows guest under xVM
> on them isn't currently possible.

From the VirtualBox manual, page 11:

"No hardware virtualization required. VirtualBox does not require processor features built into newer hardware like VT-x (on Intel processors) or AMD-V (on AMD processors). As opposed to many other virtualization solutions, you can therefore use VirtualBox even on older hardware where these features are not present. In fact, VirtualBox's sophisticated software techniques are typically faster than hardware virtualization, although it is still possible to enable hardware virtualization on a per-VM basis. Only for some exotic guest operating systems like OS/2 is hardware virtualization required."

I've been running Windows under OpenSolaris on an aged 32-bit Dell. I'm morally certain it lacks the hardware support, and in any event, the VBox configuration is set to avoid using the VT extensions anyway.

Runs fine. Not the fastest box on the planet... but it's got limited DRAM.

--
Keith H. Bierman   khbkhb at gmail.com | AIM kbiermank
5430 Nassau Circle East | Cherry Hills Village, CO 80113 | 303-997-2749
<speaking for myself*> Copyright 2008
Keith Bierman wrote:
> From the VirtualBox manual, page 11:
> "No hardware virtualization required. [...]"
>
> I've been running Windows under OpenSolaris on an aged 32-bit Dell.
> I'm morally certain it lacks the hardware support, and in any event,
> the VBox configuration is set to avoid using the VT extensions anyway.
>
> Runs fine. Not the fastest box on the planet... but it's got limited DRAM.

That is correct. VirtualBox does _not_ require the VT extensions. I was referring to xVM, which I'm still taking as synonymous with the Xen-based system. xVM _does_ require the VT hardware extensions to run guest OSes in an unmodified form, which currently includes all flavors of Windows.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
On Jun 2, 2008, at 3:24 AM, Erik Trimble wrote:
> That is correct. VirtualBox does _not_ require the VT extensions.
> I was referring to xVM, which I'm still taking as synonymous with
> the Xen-based system. xVM _does_ require the VT hardware extensions
> to run guest OSes in an unmodified form, which currently includes
> all flavors of Windows.

Ah, marketing rebranding befuddles again. It's "Sun xVM VirtualBox (tm)" as best I can tell from sun.com. So I assumed you were using xVM in the generic sense, not as Xen vs. VirtualBox.

--
Keith H. Bierman   khbkhb at gmail.com | AIM kbiermank
5430 Nassau Circle East | Cherry Hills Village, CO 80113 | 303-997-2749
<speaking for myself*> Copyright 2008
> (2) You want a 64-bit CPU. So that probably rules out your P4 machines,
> unless they were extremely late-model P4s with the EM64T features.
> Given that file-serving alone is relatively low-CPU, you can get away
> with practically any 64-bit capable CPU made in the last 4 years.

Assuming you're topped out at 4GB RAM by your motherboard, how much difference does 32-bit vs 64-bit make? Some of the old (cheap) SCSI cards I have only have 32-bit drivers on x86. The cards supported by the 64-bit kernel I'd have to buy, maybe for more than I paid for my motherboard+CPU+RAM. So I'm curious about that point.

> (5) External cases/enclosures are expensive, but nice. The bang-for-buck
> is in the "workgroup server" case, which (besides being a PC case)
> generally holds 8-10 drives for about $300 or so.

Even more bang for the buck:

PC case with power supply
42" SATA cables
SATA-to-slot-bracket cables
disk drive power to SATA power adapters
1-2 120mm fans

Bolt the drives into the case. Plug in the power adapters. Bolt the SATA-to-slot brackets in. Plug the SATA connectors into the drives. Mount the fan(s) in the case so they pull air across the drives. Bolt the box up. Now you have drives in a box with SATA ports out the back.

Now plug the 42" cables into the SATA ports on your server and feed them out a hole in the case. I label the port numbers on the cables. Plug the 42" cables into the SATA ports of the drives-in-a-box. Power the drives-in-a-box up before your server.

I've found this works. SATA isn't as sensitive to setup as SCSI was in the past. You did mention this was for a home lab, right? I've been running this way in my servers for 4 years.
> Timely discussion. I too am trying to build a stable yet inexpensive storage
> server for my home lab
[...]
> Other options are that I build a whitebox, or buy a new PowerEdge or Sun
> X2200 etc.

If this is really just a lab storage server, then an X2100 M2 will be enough. Just get the minimum spec, buy two 3.5" SATA-II disks (I guess the sweet spot is 750GB right now), and buy 8GB of third-party memory to max out the box for ZFS. Then set up a ZFS-rooted Nevada and you're in business. Depending on your requirements, you have slightly over 1.3TB capacity, or about 690GB mirrored.

I have just such a machine and am very happy. I do run S10U5 on it since I need the box for other things too, so I don't have ZFS root.

HTH -- Volker

--
------------------------------------------------------------------------
Volker A. Brandt                   Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH      WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim  Email: vab at bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgroesse: 45
Geschaeftsfuehrer: Rainer J. H. Brandt und Volker A. Brandt
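For a two-disk box like Volker describes, the striped vs. mirrored choice is a single command. A minimal sketch with hypothetical device names:

    # ~690GB usable, survives one disk failure
    zpool create tank mirror c0t0d0 c0t1d0

    # or ~1.3TB with no redundancy (both disks striped together)
    # zpool create tank c0t0d0 c0t1d0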