On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.roth at 5-cent.us wrote:
>>>> hw wrote:
>>>>> Mark Haney wrote:
>>>> <snip>
>>>>>> BTRFS isn't going to impact I/O any more significantly than, say,
>>>>>> XFS.
>>>>>
>>>>> But mdadm does, the impact is severe.  I know there are ppl saying
>>>>> otherwise, but I've seen the impact myself, and I definitely don't
>>>>> want it on that particular server because it would likely interfere
>>>>> with other services.
>>>> <snip>
>>>> I haven't really been following this thread, but if your requirements
>>>> are that heavy, you're past the point where you need to spring some
>>>> money and buy hardware RAID cards, like LSI, er, Avago, I mean, who's
>>>> bought them more recently?
>>>
>>> Heavy requirements are not needed for the impact of md-RAID to be
>>> noticeable.
>>>
>>> Hardware RAID is already in place, but the SSDs are "extra" and, as I
>>> said, not suited to be used with hardware RAID.
>>
>> Could someone, please, elaborate on the statement that "SSDs are not
>> suitable for hardware RAID"?
>
> When you search for it, you'll find that besides wearing out undesirably
> fast --- which apparently can be attributed mostly to less overcommitment
> of the drive --- you may also experience degraded performance over time,
> which can be worse than what you would get with spinning disks, or at
> least not much better.

Thanks. That seems to clear the fog a little bit. I would still like to hear
manufacturers/models here. My choices would be Areca or LSI (bought out by
Intel, so former LSI chipset and microcode/firmware), and as the SSD a
Samsung Evo SATA III. Can anyone who has used these in hardware RAID offer a
description of any bad experience?

I am kind of shying away from "crap" hardware, which in the long run is more
expensive even though it looks cheaper (Pricegrabber is your enemy, I would
normally say to my users). So I would never consider using poorly/cheaply
designed hardware in a setup one expects performance from (e.g. hardware
RAID based storage). Am I still taking a chance of hitting a "bad" hardware
RAID + SSD combination? Just curious where we actually stand.

Thanks again for a fruitful discussion!

Valeri

> Add to that the firmware being designed for an entirely different
> application and having bugs, and your experiences with surprisingly
> incompatible hardware, and you can imagine that using an SSD not designed
> for hardware RAID applications with hardware RAID is a bad idea.  There
> is a difference like night and day between "consumer hardware" and
> hardware you can actually use, and that is not only the price you pay
> for it.

++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
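Since the wear concern raised above is measurable from the OS, one cheap
sanity check is to watch the drive's SMART wear counters over time. A
minimal sketch, assuming smartmontools is installed and the SSD is visible
to the OS as /dev/sda (the wear attribute names vary by vendor, so the grep
pattern may need adjusting):

    # Overall health verdict (PASSED/FAILED)
    smartctl -H /dev/sda

    # Dump vendor-specific attributes and pick out the wear-related ones;
    # Samsung, Intel and others name these differently
    smartctl -A /dev/sda | grep -Ei 'wear|wearout|percent_lifetime|used_rsvd'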
On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
> Thanks. That seems to clear the fog a little bit. I would still like to
> hear manufacturers/models here. My choices would be Areca or LSI (bought
> out by Intel, so former LSI chipset and microcode/firmware), and as the
> SSD a Samsung Evo SATA III. Can anyone who has used these in hardware
> RAID offer a description of any bad experience?

Does the Samsung EVO have supercaps and write-back buffer protection?  If
not, it is in NO way suitable for reliable use in a RAID/server environment.

As far as RAIDing SSDs goes, the ONLY RAID I'd use with them is RAID 1
mirroring (or, with more than 2 drives, RAID 10 striped mirrors).  And I'd
probably do it with OS-based software RAID, as that's more likely to support
SSD TRIM than a hardware RAID card, plus it allows the host to monitor the
SSDs via SMART, which a hardware RAID card probably hides.

I'd also make sure to undercommit the size of the SSD: if it's a 500GB SSD,
I'd make absolutely sure never to have more than 300-350GB of data on it.
If it's part of a stripe set, the only way to ensure this is to partition it
so the RAID slice is only 300-350GB.

--
john r pierce, recycling bits in santa cruz
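As a rough illustration of the approach described above (a software RAID 1
mirror built on deliberately undersized partitions, with TRIM and SMART
still available to the host), here is a minimal sketch. The device names
/dev/sdb and /dev/sdc, the 350GiB member size, and the mount point are
assumptions for the example only:

    # Partition each ~500GB SSD so the RAID member uses only ~350GiB,
    # leaving the remainder unallocated as extra over-provisioning
    parted -s /dev/sdb mklabel gpt mkpart raid 1MiB 350GiB
    parted -s /dev/sdc mklabel gpt mkpart raid 1MiB 350GiB

    # Build a RAID 1 mirror from the two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # Filesystem and mount on top of the mirror
    mkfs.xfs /dev/md0
    mount /dev/md0 /srv/data

    # Discard unused blocks periodically (or enable the fstrim.timer unit
    # where available)
    fstrim -v /srv/data

    # SMART stays visible to the host since no RAID controller sits in front
    smartctl -H /dev/sdb

Whether TRIM requests actually pass through an md mirror depends on the
kernel and the drives, so it is worth verifying with "lsblk --discard" on
the specific system.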
On Fri, Sep 8, 2017 at 2:52 PM, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
> manufacturers/models here. My choices would be Areca or LSI (bought out
> by Intel, so former LSI chipset and microcode/firmware), and as the SSD a
> Samsung

Intel only purchased the networking component of LSI, Axxia, from Avago.
The RAID division was merged into Broadcom (post-Avago merger).
On Fri, September 8, 2017 3:06 pm, John R Pierce wrote:
> On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
>> Thanks. That seems to clear the fog a little bit. I would still like to
>> hear manufacturers/models here. My choices would be Areca or LSI (bought
>> out by Intel, so former LSI chipset and microcode/firmware), and as the
>> SSD a Samsung Evo SATA III. Can anyone who has used these in hardware
>> RAID offer a description of any bad experience?
>
> Does the Samsung EVO have supercaps and write-back buffer protection?  If
> not, it is in NO way suitable for reliable use in a RAID/server
> environment.

With all due respect, John, this is the same situation as a hard drive's
cache not being backed up, power-wise, in case of power loss. And hard
drives all lie about a write operation being completed before the data is
actually on the platters. So we could make the same claim: hard drives are
not suitable for RAID. What I was hoping to find out from the experts is in
what respect SSDs are claimed to be unsuitable for hardware RAID as opposed
to mechanical hard drives. Am I missing something?

> As far as RAIDing SSDs goes, the ONLY RAID I'd use with them is RAID 1
> mirroring (or, with more than 2 drives, RAID 10 striped mirrors).  And
> I'd probably do it with OS-based software RAID, as that's more likely to
> support SSD TRIM than a hardware RAID card, plus it allows the host to
> monitor the SSDs via SMART, which a hardware RAID card probably hides.

Good, thanks. My 3ware RAIDs, through their 3dm daemon, do warn me about a
SMART status of "fail" (meaning the drive, though still working, should
according to SMART be replaced ASAP). I'm not certain offhand about the LSI
ones (one should be able to query them through the command line client
utility).

> I'd also make sure to undercommit the size of the SSD: if it's a 500GB
> SSD, I'd make absolutely sure never to have more than 300-350GB of data
> on it.  If it's part of a stripe set, the only way to ensure this is to
> partition it so the RAID slice is only 300-350GB.

Great point! And one may want to adjust the stripe size to match the SSD's
internals, as the default is chosen for spinning drives, right?

Thanks, John, that was instructive!

Valeri

> --
> john r pierce, recycling bits in santa cruz

++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
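On the point about SMART behind hardware RAID: smartmontools can often
address drives through 3ware and LSI/MegaRAID controllers directly, so the
data is not necessarily hidden. A sketch; the device nodes and disk numbers
below are only examples and depend on the controller model and port layout:

    # Physical drive on port 0 of a 3ware controller
    smartctl -a -d 3ware,0 /dev/twa0

    # Physical drive with device ID 0 behind an LSI MegaRAID controller
    smartctl -a -d megaraid,0 /dev/sda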
Valeri Galtsev wrote:
> Thanks. That seems to clear the fog a little bit. I would still like to
> hear manufacturers/models here. My choices would be Areca or LSI (bought
> out by Intel, so former LSI chipset and microcode/firmware), and as the
> SSD a Samsung Evo SATA III. Can anyone who has used these in hardware
> RAID offer a description of any bad experience?

It depends on your budget, on the hardware you plan to use the controller
with, and on what you're intending to do.

I wouldn't recommend using SSDs with hardware RAID unless they are
explicitly rated for it. Samsung seems to have firmware bugs that make the
kernel/btrfs disable some features.

I'd go with Intel SSDs and either use md-RAID or btrfs, but the reliability
of btrfs is questionable, and md-RAID has a performance penalty.
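If the btrfs route mentioned above were taken, a native btrfs mirror avoids
the separate md layer entirely. A minimal sketch, with /dev/sdb and /dev/sdc
standing in for the two SSDs and /srv/data as an example mount point:

    # Mirror both data and metadata across the two drives
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

    # Either member device can be given to mount; btrfs assembles the pair
    mount /dev/sdb /srv/data

    # Verify checksums and repair from the good copy when they diverge
    btrfs scrub start /srv/data

    # Show how space and redundancy are laid out across the devices
    btrfs filesystem usage /srv/data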