Hi, I'm looking at creating a large home use storage machine. Budget is a concern, but size and reliability are also priorities. Noise is also a concern, since this will be at home, in the basement. That, and cost, pretty much rules out a commercial case, such as a 3U case. It would be nice, but it greatly inflates the budget. This pretty much restricts me to a tower case.

The primary use of this machine will be a backup server[1]. Secondary uses will include minor tasks such as Samba/CIFS, cvsup, etc.

I'm thinking of 8x1TB (or larger) SATA drives. I've found a case[2] with hot-swap bays[3] that seems interesting. I haven't looked at power supplies, but given that number of drives, I expect something beefy with a decent reputation is called for.

Whether I use hardware or software RAID is undecided. I think I am leaning towards software RAID, probably ZFS under FreeBSD 8.x. I'm open to hardware RAID, but I suspect the cost won't justify it given ZFS.

Given that, what motherboard and RAM configuration would you recommend to work with FreeBSD [and probably ZFS]? The lists seem to indicate that more RAM is better with ZFS.

Thanks.

[1] - FYI running Bacula, but that's out of scope for this question
[2] - http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058
[3] - nice to have, especially for a failure.
On Mon, 8 Feb 2010, Dan Langille wrote:
> Given that, what motherboard and RAM configuration would you
> recommend to work with FreeBSD [and probably ZFS]. The lists seem
> to indicate that more RAM is better with ZFS.

I have something similar (5x1TB) - I have a Gigabyte GA-MA785GM-US2H with an Athlon X2 and 4GB of RAM (only half filled - 2x2GB). The board has 5 SATA ports + 1 eSATA (I looped that back into the case to connect to the DVD drive :).

I boot it off a 4GB CF card in an IDE adapter. I think you could boot off ZFS, but it seemed a bit unreliable when I installed it, so I opted for a more straightforward method.

The CPU fan is fairly quiet (although a 3rd-party one would probably be quieter) and the rest of the motherboard is fanless. The onboard video works great with radeonhd (it's a workstation for someone as well as a file server).

Note that it doesn't support ECC; I don't know if that is a problem.

--
Daniel O'Connor, software and network engineer for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there are so many of them to choose from." -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
On 08.02.2010 06:01, Dan Langille wrote:
> I'm looking at creating a large home use storage machine. Budget is a
> concern, but size and reliability are also a priority. [snip]
>
> Given that, what motherboard and RAM configuration would you recommend
> to work with FreeBSD [and probably ZFS]. The lists seem to indicate
> that more RAM is better with ZFS.

Just before Christmas, I rebuilt my own storage backend server for my home, so I've had a recent look at "what's there". Some hardware I had from the old solution, and some was new. Some of it is a tad more expensive than what you gave as the idea here, but the logic is (mostly) the same. I'll also include what replacements for some of the old parts I'm looking at.
Heirlooms of the old server:
- Disks (four Samsung HD501LJ, four Seagate ST31500341AS)
- Disk controller: AMI/LSI Logic MegaRAID SAS 8308ELP (8-channel, mfi)

The new hardware around this:
- Chieftec UNB-410F-B
- Two Chieftec SST3141SAS
- Chieftec APS-850C (850 watt modular power supply)
- Intel E7500 CPU using the bundled stock cooler, and Arctic Silver paste
- 4x 2GB Corsair ValueRAM DDR2 1066 sticks
- Asus P5Q Premium mainboard
- LSI SAS3801E (for the tape autoloader)
- Some old graphics board (unless you need a lot of fancy 3D stuff, use whatever you have around that's not ESD-damaged here)

Should I have started from scratch, I'd have used Seagate 2TB "Green" disks, due to their lower temperatures and power consumption. And that's about the only thing I'd do differently. The mfi controller (MegaRAID) would stay, simply because it has built-in logic to periodically do patrol reading and consistency checks, and I've had previous experiences with the RAID controller's checks discovering bad disks before they go critical. But this breed of controllers is a little costly (customers are willing to pay for the features, so the manufacturer milks them for all they can).

I recommend you go for a modular power supply that is rated for quite a bit more than what you expect to draw from it. The reason is that as current increases, the efficiency of the conversion drops, so a PSU running at half its rated max is more efficient than one pushed to the limits. Go for modular so you don't have the extra cables tied into coils inside your machine disrupting airflow (and creating EMF noise).

Make sure you get yourself a proper ESD wrist strap (or ankle strap) before handling any of these components, and make sure you use correct torque for all the screws holding the components (and disks). This machine will probably have a lot of uptime, and disks (and fans) create vibrations.
If in doubt, use some fancy-colored nail polish (or Loctite) on the screws to make sure they don't unscrew from vibrations over time (a loose screw has a will of its own, and WILL short-circuit the most expensive component in your computer). Also make sure you use cable ties to get the cables out of the airflow, so you get sufficient cooling.

Speaking of cooling, make sure your air inputs have some sort of filtering, or you'll learn where Illiad (userfriendly.org) got the idea for "Dustpuppy". No matter how pedantic you are about cleaning your house, a computer is basically a large, expensive vacuum cleaner and WILL suck in dust from the air.

These are some of the pointers I'd like to share on the subject. :)

//Svein

--
Svein Skogen | Solberg Østli 9, 2020 Skedsmokorset, Norway
System Admin, stillbilde.net | svein@jernhuset.no | svein@stillbilde.net
Mobile Phone: +47 907 03 575 | RIPE handle: SS16503-RIPE
If you really are in a hurry, mail me at svein-mobile@stillbilde.net
This mailbox goes directly to my cellphone and is checked even when I'm not in front of my computer.
Picture Gallery: https://gallery.stillbilde.net/v/svein/
Dan Langille wrote:
> I'm looking at creating a large home use storage machine. Budget is a
> concern, but size and reliability are also a priority. [snip]
>
> The primary use of this machine will be a backup server[1]. Secondary
> uses will include minor tasks such as samba, CIFS, cvsup, etc.

It depends on your needs (storage capacity [number of drives], performance, etc.). One year ago I purchased an HP ProLiant ML110 G5 / P2160 / 1GB / 250GB SATA / DVDRW / Tower (with 3 years Next Business Day support!). It is sold for about 9000,- CZK ($500); I added another 4GB of RAM and 4x 1TB Samsung F1 drives instead of the original 250GB Seagate. The system is booted from a 2GB internal USB flash drive and all drives are in a RAIDZ pool. The machine is really quiet. All in all, the cost is about $1000 with 3 years NBD. You can put in 2TB drives instead of 1TB drives. It is a really low-end machine, but it has run without problems for more than a year.

Miroslav Lachman
While I'm not a heavy FreeBSD user, I can offer you some advice on hardware at least, based on my own experience. If you want things to work as well as possible, go with an Intel chipset and LAN. AMD chipsets work (mostly), but you'll have worse performance and you won't get an Intel NIC, which performs much better than the Realtek or Attansic ones you usually find on AMD motherboards.

A general tip is to go for "business" chipsets, as Intel likes to call them: Q35 (I have a few of those and they work very well), Q45 and Q57. By doing so you can be sure to get an Intel NIC, and they aren't much more expensive than your average motherboard; they also usually carry some kind of remote management. Bearing in mind that FreeBSD may or may not support the newest hardware around, I'd guess that Q57 needs -CURRENT for now, but I would highly recommend it as Socket 775 is slowly dying. The ASUS P7Q57-M DO looks like a very nice board if you want "bleeding edge"; keep in mind, though, that at time of writing support for its NIC (82578DM) doesn't seem to be in FreeBSD, but I guess it's a short matter of time. Pair it with the slowest Core i3 CPU you can find and you have a very nice solution. If you step up to an i5 you get hardware encryption =)

If you want legacy, the Intel DQ45CB should be a pretty nice choice with supported LAN out of the box. An Intel Pentium E6300 should be more than enough for storage. Both MSI and Gigabyte also make Q-chipset motherboards, but they don't seem to be widely available in the US; their boards should be fine too.

Since you want to have more than 5 HDDs, you need a controller card of some sort; in that case I would recommend you have a look at the Supermicro ones mentioned in this post: http://forums.freebsd.org/showpost.php?p=59735&postcount=5

UIO is just a backwards PCIe slot, so turning the card around will make it fit, although you may need to secure it somehow. They may be a bit hard to find, but you can find a few sellers on eBay too.
What I don't know is how the motherboards will react if you pop one in, which you need to do some research on. As for memory, you'll need at least 2GB, but 4GB is highly recommended if you're going to use ZFS. Just make sure the sticks follow JEDEC standards and you'll be fine (the Corsair Value Select series or stock Crucial are fine).

//Daniel
On Mon, 8 Feb 2010, Dan Langille wrote:
> I'm looking at creating a large home use storage machine. Budget is a
> concern, but size and reliability are also a priority. Noise is also a
> concern, since this will be at home, in the basement. That, and cost, pretty
> much rules out a commercial case, such as a 3U case. It would be nice, but
> it greatly inflates the budget. This pretty much restricts me to a tower
> case.

I recently had to put together something very cheap for a client for disk-only backups (rsync + zfs snapshots). As you noticed, rack enclosures that will hold a bunch of drives are insanely expensive. I put my "wishlist" from NewEgg below. While the $33 case is a bit flimsy, the extra high-CFM fan in the back and the fan that sits in front of the drive bays keep the drives extremely cool. For $33, I lucked out.

> I'm thinking of 8x1TB (or larger) SATA drives. I've found a case[2] with
> hot-swap bays[3], that seems interesting. I haven't looked at power
> supplies, but given that number of drives, I expect something beefy with a
> decent reputation is called for.

For home use, is the hot-swap option really needed? Also, it seems like people who use ZFS (or gmirror + gstripe) generally end up buying pricey hardware RAID cards for compatibility reasons. There seem to be no decent add-on SATA cards that play nice with FreeBSD other than that weird Supermicro card that has to be physically hacked about to fit. I did "splurge" for a server-class board from Supermicro since I wanted BIOS serial port console redirection, and as many SATA ports on-board as I could find.

> Whether I use hardware or software RAID is undecided.
> I think I am leaning towards software RAID, probably ZFS under FreeBSD 8.x
> but I'm open to hardware RAID but I think the cost won't justify it given
> ZFS.

I've had two very different ZFS experiences so far. On the hardware I mention in this message, I had zero problems and excellent performance (bonnie++ showing 145MB/s reads, 132MB/s writes on a 4-disk RAIDZ1 array) with 8.0/amd64 w/4GB of RAM. I did no "tuning" at all - amd64 is the way to go for ZFS. On an old machine at home with 2 old (2003-era) 32-bit Xeons, I ran into all the issues people see with i386+ZFS - kernel memory exhaustion resulting in a panic, screwing around with an old 3ware RAID card (JBOD mode) that cannot properly scan for new drives, just a total mess without lots of futzing about.

> Given that, what motherboard and RAM configuration would you recommend to
> work with FreeBSD [and probably ZFS]. The lists seem to indicate that more
> RAM is better with ZFS.

Here's the list: http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=8441629

Just over $1K, and I've got 4 nice drives, ECC memory, and a server board. Going with the Celeron saved a ton of cash with no impact on ZFS that I can discern, and again, going with a cheap tower case slashed the cost as well. That whole combo works great. Now when I use up those 6 SATA ports, I don't know how to get more cheaply, but I'll worry about that later...

Charles
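For readers trying to reproduce a setup like the 4-disk RAIDZ1 array Charles benchmarked, pool creation is a single command. This is only a sketch: the pool name (tank) and device nodes (ada0-ada3) are hypothetical placeholders; check your actual device names first (e.g. with `camcontrol devlist`).

```shell
# Sketch: create a single-parity RAIDZ pool from four whole disks.
# Pool and device names are hypothetical; substitute your own.
zpool create tank raidz1 ada0 ada1 ada2 ada3

# Confirm the layout and health of the new pool.
zpool status tank
```

With one parity disk, such a pool survives the loss of any single drive; usable capacity is roughly three of the four disks.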
CS> pricey hardware raid cards for compatibility reasons. There seem to
CS> be no decent add-on SATA cards that play nice with FreeBSD other than
CS> that weird supermicro card that has to be physically hacked about to
CS> fit.

BTW: I recently built some more machines with this card. I can confirm now that you can use it with "standard" brackets, if you have some spare. The distance between the two holders is the same as for e.g. 3ware 95/96 controllers, and I had some spares in standard height there because I use the 3wares in low-profile setups. The brackets of Intel NICs seem to fit, too. The only thing that is different with the card now is the side on which the components are mounted. But this should not be a problem unless you want to place it next to a graphics card.

cu Gerrit
On 2/8/2010 12:01 AM, Dan Langille wrote:> Hi, > > I'm thinking of 8x1TB (or larger) SATA drives. I've found a case[2] with > hot-swap bays[3], that seems interesting. I haven't looked at power > supplies, but given that number of drives, I expect something beefy with > a decent reputation is called for.I have a system with two of these [1] and an 8 port LSI SAS card that runs fine for me. I run an 8 drive ZFS array off the LSI card and then have 2 drives mirrored off the motherboard SATA ports for booting with ZFS. Hotswap works fine for me as well with this hardware. Jonathan http://www.newegg.com/Product/Product.aspx?Item=N82E16816215001
On Mon, 8 Feb 2010, Dan Langille wrote: DL> I'm looking at creating a large home use storage machine. Budget is a DL> concern, but size and reliability are also a priority. Noise is also a DL> concern, since this will be at home, in the basement. That, and cost, DL> pretty much rules out a commercial case, such as a 3U case. It would be DL> nice, but it greatly inflates the budget. This pretty much restricts me to DL> a tower case. [snip] We use the following at work, but it's still pretty cheap and pretty silent: Chieftec WH-02B-B (9x5.25 bays) filled with 2 x Supermicro CSE-MT35T http://www.supermicro.nl/products/accessories/mobilerack/CSE-M35T-1.cfm for regular storage, 2 x raidz1 1 x Promise SuperSwap 1600 http://www.promise.com/product/product_detail_eng.asp?product_id=169 for changeable external backups and still have 2 5.25 bays for anything interesting ;-) other parts are regular SocketAM2+ motherboard, Athlon X4, 8G ram, FreeBSD/amd64 -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------
Dan Langille wrote:
> I'm looking at creating a large home use storage machine. [snip]
>
> Given that, what motherboard and RAM configuration would you recommend
> to work with FreeBSD [and probably ZFS]. The lists seem to indicate
> that more RAM is better with ZFS.

After creating three different system configurations (Athena, Supermicro, and HP), my configuration of choice is this Supermicro setup:

1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
2. SuperMicro 5046A $750 (+ $43 shipping)
3. LSI SAS 3081E-R $235
4. SATA cables $60
5. Crucial 3x2GB ECC DDR3-1333 $191 (+ $6 shipping)
6. Xeon W3520 $310

Total price with shipping: $1560. Details and links at http://dan.langille.org/2010/02/14/supermicro/

I'll probably start with 5 HDDs in the ZFS array, 2x gmirror'd drives for the boot, and 1 optical drive (so 8 SATA ports).
> On Sun, 14 Feb 2010, Dan Langille wrote:
>> After creating three different system configurations (Athena,
>> Supermicro, and HP), my configuration of choice is this Supermicro
>> setup:
>>
>> 1. Samsung SATA CD/DVD Burner $20 (+ $8 shipping)
>> 2. SuperMicro 5046A $750 (+ $43 shipping)
>> 3. LSI SAS 3081E-R $235
>> 4. SATA cables $60
>> 5. Crucial 3x2GB ECC DDR3-1333 $191 (+ $6 shipping)
>> 6. Xeon W3520 $310

You do realise how much of a massive overkill this is and how much you are overspending?

- Dan Naumov
Steve Polyack wrote:
> On 2/10/2010 12:02 AM, Dan Langille wrote:
>> Don't use a port multiplier and this goes away. I was hoping to avoid
>> a PM, and using something like the Syba PCI Express SATA II 4-port
>> RAID controller seems to be the best solution so far.
>> http://www.amazon.com/Syba-Express-Ports-Controller-SY-PEX40008/dp/B002R0DZWQ/ref=sr_1_22?ie=UTF8&s=electronics&qid=1258452902&sr=1-22
>
> Dan, I can personally vouch for these cards under FreeBSD. We have 3 of
> them in one system, with almost every port connected to a port
> multiplier (SiI5xxx PMs). Using the siis(4) driver on 8.0-RELEASE
> provides very good performance, and supports both NCQ and FIS-based
> switching (essential for decent port-multiplier performance).
>
> One thing to consider, however, is that the card is only single-lane
> PCI-Express. The bandwidth available is only 2.5Gb/s (~312MB/sec,
> slightly less than the SATA-2 link spec), so if you have 4
> high-performance drives connected, you may hit a bottleneck at the
> bus. I'd be particularly interested if anyone can find any similar
> Silicon Image SATA controllers with a PCI-E 4x or 8x interface ;)

Here is a SiI3124-based card with a built-in PCIe x8 bridge: http://www.addonics.com/products/host_controller/adsa3gpx8-4em.asp

It is not so cheap, but with 12 disks connected via 4 port multipliers it can give up to 1GB/s (4x250MB/s) of bandwidth. The cheaper PCIe x1 version mentioned above gave me up to 200MB/s, which is the maximum of what I've seen from PCIe 1.0 x1 controllers. Given NCQ and FBS support, that can be enough for many real-world applications that don't need such high linear speeds but have many concurrent I/Os.

-- Alexander Motin
Dan Naumov wrote:
> On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille <dan@langille.org> wrote:
>> Dan Naumov wrote:
>>> You do realise how much of a massive overkill this is and how much you
>>> are overspending?
>>
>> I appreciate the comments and feedback. I'd also appreciate alternative
>> suggestions in addition to what you have contributed so far. Spec out the
>> box you would build.
>
> =====================
> Case: Fractal Design Define R2 - 89 euro -
> http://www.fractal-design.com/?view=product&prod=32

That is a nice case. It's one slot short for what I need. The trays are great. I want three more slots: 2x SATA for a gmirrored base OS, plus an optical drive. As someone mentioned on IRC, there are many similar non hot-swap cases. From the website, I couldn't see this for sale in the USA, but converting your price to US$, it is about $121.

Looking around, this case was suggested to me. I like it a lot: LIAN LI PC-A71F Black Aluminum ATX Full Tower Computer Case $240 http://www.newegg.com/Product/Product.aspx?Item=N82E16811112244

> Mobo/CPU: Supermicro X7SPA-H / Atom D510 - 180-220 euro -
> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H

Non-ECC RAM only, whereas ECC is something I'd like to have. $175

> PSU: Corsair 400CX 80+ - 59 euro -
> http://www.corsair.com/products/cx/default.aspx

http://www.newegg.com/Product/Product.aspx?Item=N82E16817139008 for $50

Is that sufficient power for up to 10 SATA HDDs and an optical drive?
> RAM: Corsair 2x2GB, DDR2 800MHz SO-DIMM, CL5 - 85 euro

http://www.newegg.com/Product/Product.aspx?Item=N82E16820145238 $82

> =====================
> Total: ~435 euro

With my options, it's about $640 with shipping etc.

> The motherboard has 6 native AHCI-capable ports on the ICH9R controller
> and you have a PCI-E slot free if you want to add an additional
> controller card. Feel free to blow the money you've saved on crazy
> fast SATA disks, and if your system workload is going to have a lot of
> random reads, then spend 200 euro on an 80GB Intel X25-M for use as a
> dedicated L2ARC device for your pool.

I have been playing with the idea of an L2ARC device. They sound crazy cool.

Thank you Dan.

-- dan
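For context, attaching an SSD like the X25-M to an existing pool as a dedicated L2ARC (cache) device is a one-line operation. The sketch below uses hypothetical names: the pool is assumed to be called tank and the SSD to appear as ada6.

```shell
# Sketch: add an SSD as an L2ARC (cache) vdev. Cache contents are
# volatile, so the device can fail or be pulled without data loss.
zpool add tank cache ada6

# A cache vdev can also be removed again at any time:
zpool remove tank ada6
```

Because the cache is volatile, there is no redundancy requirement on the device itself, which is what makes a single cheap SSD attractive here.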
>> PSU: Corsair 400CX 80+ - 59 euro -
>> http://www.corsair.com/products/cx/default.aspx
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16817139008 for $50
>
> Is that sufficient power for up to 10 SATA HDDs and an optical drive?

Disk power use varies from about 8 watts/disk for "green" disks to 20 watts/disk for really power-hungry ones. So yes.

- Sincerely, Dan Naumov
Dan Langille wrote:
> Dan Naumov wrote:
>> Now add an additional PCI-E SATA controller card, like the often
>> mentioned PCIe SIL3124.
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816124026 for $35

This is the PCI-X version. Unless you have a PCI-X slot, the PCIe x1 version seems preferable: http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027

-- Alexander Motin
On Mon, 15 Feb 2010, Dan Naumov wrote:
DN> > Is that sufficient power for up to 10 SATA HDDs and an optical drive?
DN>
DN> Disk power use varies from about 8 watt/disk for "green" disks to 20
DN> watt/disk for really powerhungry ones. So yes.

The only thing one should be aware of is that the startup current of contemporary 3.5" SATA disks can exceed 2.5A on the 12V bus, so delaying platter spin-up (staggered spin-up) is rather vital. Or get a 500-520 VA PSU to be sure. Or do both, just to be on the safe side ;-)

-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
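Dmitry's spin-up concern is easy to sanity-check with arithmetic. The sketch below assumes 8 drives, a 2.5 A spin-up surge per drive on the 12 V rail, 20 W per drive running, and roughly 100 W for the board, CPU, and fans; all of these are the thread's rough estimates, not datasheet values.

```shell
#!/bin/sh
# Back-of-the-envelope PSU sizing for an 8-drive array.
drives=8

# Spin-up: 2.5 A/drive at 12 V. Work in tenths of an amp to stay
# in integer arithmetic: 25 tenths/drive * 12 V / 10 = watts.
surge_watts=$((drives * 25 * 12 / 10))
echo "worst-case simultaneous spin-up on 12 V rail: ${surge_watts} W"

# Steady state: ~20 W/drive plus ~100 W for the rest of the system.
run_watts=$((drives * 20 + 100))
echo "steady-state estimate: ${run_watts} W"
```

The spin-up figure (240 W on the 12 V rail alone) is why a 400 W unit can be marginal without staggered spin-up, even though the steady-state draw is comfortably within its rating.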
> I had a feeling someone would bring up L2ARC/cache devices. This gives
> me the opportunity to ask something that's been on my mind for quite
> some time now:
>
> Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
> benefit to adding a dedicated RAM disk (e.g. md(4)) to a pool as
> L2ARC/cache? The ZFS documentation explicitly states that cache
> device content is considered volatile.

Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you have RAM to spare, it should be used by the regular ARC.

- Sincerely, Dan Naumov
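If the goal is simply to let spare RAM serve the regular ARC (or, conversely, to stop the ARC from squeezing other daemons), the relevant knobs live in /boot/loader.conf on FreeBSD. The values below are illustrative only, not recommendations for any particular machine.

```shell
# /boot/loader.conf -- illustrative ARC sizing (tune to your RAM)
vfs.zfs.arc_max="3G"        # upper bound on ARC size

# On i386 the kernel memory map usually also needs enlarging for ZFS;
# this is unnecessary on amd64:
# vm.kmem_size="1G"
# vm.kmem_size_max="1G"
```

Left untuned on amd64, the ARC will grow and shrink on its own, which is exactly the behavior that makes a RAM-backed L2ARC redundant.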
Ulf Zimmermann wrote:
> On Sun, Feb 14, 2010 at 07:33:07PM -0500, Dan Langille wrote:
>>> Get a dock for holding 2x 2.5" disks in a single 5.25" slot and put
>>> it at the top, in the only 5.25" bay of the case.
>>
>> That sounds very interesting. I was just looking around for such a thing
>> and could not find it. Is there a more specific name? URL?
>
> I had an Addonics 5.25" frame for 4x 2.5" SAS/SATA, but the small fans in it
> are unfortunately of the cheap kind. I ended up using the 2x 2.5" to 3.5"
> frame from Silverstone (for the small Silverstone case I got).

Ahh, something like this: http://silverstonetek.com/products/p_contents.php?pno=SDP08&area=usa

I understand now. Thank you.
Dan Langille wrote:
> On Wed, February 10, 2010 10:00 pm, Bruce Simpson wrote:
>> On 02/10/10 19:40, Steve Polyack wrote:
>>> I haven't had such a bad experience as the above, but it is certainly a
>>> concern. Using ZFS we simply 'offline' the device, pull, replace with
>>> a new one, glabel, and zfs replace. It seems to work fine as long as
>>> nothing is accessing the device you are replacing (otherwise you will
>>> get a kernel panic a few minutes down the line). mav@FreeBSD.org has
>>> also committed a large patch set to 9-CURRENT which implements
>>> "proper" SATA/AHCI hot-plug support and error-recovery through CAM.
>>
>> I've been running with this patch in 8-STABLE for well over a week now
>> on my desktop w/o issues; I am using the main disk for dev, and an eSATA
>> disk pack for light multimedia use.
>
> MFC to 8.x?

Merged.

-- Alexander Motin
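The offline/replace cycle Steve describes maps onto a handful of commands. This is a sketch only: the pool name (tank), glabel name (disk3), and device node (ada3) are hypothetical stand-ins for your own.

```shell
# Sketch of the hot-swap replacement procedure described above.
# Take the failing disk out of service before pulling it:
zpool offline tank label/disk3

# ...physically swap the drive, then label its replacement
# (here the new drive is assumed to attach as ada3)...
glabel label disk3 ada3

# Resilver onto the new device and watch progress:
zpool replace tank label/disk3
zpool status tank
```

Using glabel names rather than raw device nodes means the pool keeps working even if the replacement drive attaches under a different adaX number.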
On Tue, 16 Feb 2010 13:30, mattjreimer@ wrote:
> On Mon, Feb 15, 2010 at 5:49 PM, jhell <jhell@dataix.net> wrote:
>> It is funny that you guys are all of a sudden talking about this, as I was
>> just working on some modifications to the arc_summary.pl script for some
>> better formatting and inclusion of kmem statistics.
>>
>> My intent with the modifications is to make the output more usable to the
>> whole community by revealing the relevant system information that can be
>> included in an email to the lists for diagnosis by others.
>>
>> Example output:
>> ---------------------------------------------------------------------
>> System Summary
>> OS Revision: 199506
>> OS Release Date: 703100
>> Hardware Platform: i386
>> Processor Architecture: i386
>> Storage pool Version: 13
>> Filesystem Version: 3
>>
>> Kernel Memory Usage
>> TEXT: 8950208 KiB, 8 MiB
>> DATA: 206347264 KiB, 196 MiB
>> TOTAL: 215297472 KiB, 205 MiB
>
> Above, did you really mean "8950208 B" rather than KiB, etc.?
>
> Matt

Yes, thank you for pointing this out. I have fixed it, and it will be added at the same URL as before in about 3 or so hours from the time of this email. This update will also add some stats for the L2ARC as well.

Thanks again.

-- jhell
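For anyone who wants the raw numbers without the script, much of what arc_summary.pl reports on FreeBSD comes straight from sysctl. The OID names below are the ones exposed on 8.x-era systems; this is a sketch, so verify them on your own machine with `sysctl -a | grep arcstats`.

```shell
# Current ARC size and configured ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# Size of the kernel memory map the ARC lives in
sysctl vm.kmem_size vm.kmem_size_max
```

Note that all of these report plain bytes, which is exactly the unit confusion Matt caught in the script's output above.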