On Fri, September 8, 2017 9:48 am, hw wrote:
> m.roth at 5-cent.us wrote:
>> hw wrote:
>>> Mark Haney wrote:
>> <snip>
>>>> BTRFS isn't going to impact I/O any more significantly than, say,
>>>> XFS.
>>>
>>> But mdadm does, and the impact is severe. I know there are people
>>> saying otherwise, but I've seen the impact myself, and I definitely
>>> don't want it on that particular server because it would likely
>>> interfere with other services.
>> <snip>
>> I haven't really been following this thread, but if your requirements
>> are that heavy, you're past the point where you need to spring some
>> money and buy hardware RAID cards, like LSI, er, Avago, I mean,
>> whoever has bought them more recently?
>
> Heavy requirements are not needed for the impact of md-RAID to be
> noticeable.
>
> Hardware RAID is already in place, but the SSDs are "extra" and, as I
> said, not suited to be used with hardware RAID.

Could someone, please, elaborate on the statement that "SSDs are not
suitable for hardware RAID"?

Thanks.
Valeri

++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
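[Editor's note: one way to put numbers on the md-RAID overhead hw
mentions is to run the same fio job against a raw SSD partition and
against an md device built on identical partitions, then compare IOPS.
The Python sketch below is a minimal version of that comparison, not a
rigorous benchmark; it assumes fio is installed, and /dev/sdb2 and
/dev/md0 are hypothetical scratch devices whose contents will be
destroyed.]

#!/usr/bin/env python3
"""Sketch: compare 4k random-write IOPS on a raw SSD partition versus an
md-RAID device built on identical partitions. Destroys data on both
devices; the device paths are hypothetical placeholders."""
import json
import subprocess

def randwrite_iops(dev):
    """Run a short direct-I/O 4k random-write fio job, return IOPS."""
    out = subprocess.run(
        ["fio", "--name=probe", "--filename=" + dev, "--rw=randwrite",
         "--bs=4k", "--iodepth=32", "--ioengine=libaio", "--direct=1",
         "--runtime=30", "--time_based", "--output-format=json"],
        capture_output=True, text=True, check=True).stdout
    # fio's JSON output carries per-job stats under jobs[].write.iops
    return json.loads(out)["jobs"][0]["write"]["iops"]

if __name__ == "__main__":
    raw = randwrite_iops("/dev/sdb2")  # hypothetical raw SSD partition
    md = randwrite_iops("/dev/md0")    # hypothetical md-RAID1 on same SSDs
    print("raw SSD : %10.0f IOPS" % raw)
    print("md-RAID1: %10.0f IOPS (%.0f%% of raw)" % (md, 100 * md / raw))

[Direct I/O (--direct=1) keeps the page cache out of the measurement, so
the gap between the two numbers is mostly the md layer itself.]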
On 8 September 2017 at 11:00, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
> <snip>
> Could someone, please, elaborate on the statement that "SSDs are not
> suitable for hardware RAID"?

It will depend on the type of SSD and the type of hardware RAID. There
are at least four different classes of SSD drives with different levels
of cache, write/read performance, number of lifetime writes, etc. There
are also multiple types of hardware RAID. A lot of hardware RAID
controllers try to even out disk usage in different ways. This means
'moving' heavily used data from slow parts of the disk to fast parts,
etc. On an SSD none of these extra writes are needed, so if the hardware
RAID doesn't know about SSD technology it will wear the SSD out quickly.
Another hardware RAID feature that can cause faster failures on SSDs is
the constant test writes some controllers issue to see whether disks
have gone bad. Again, if you have gone with commodity SSDs, this will
wear the drive out faster than expected, and boom, bad disks.

That said, some hardware RAID controllers are supposedly made to work
with SSD technology. They don't do those extra writes, and they also
assume that the disks underneath read/write in near-constant time, so
queueing of data is handled differently. However, that gear costs extra
money and is not usually shipped in standard OEM hardware.

--
Stephen J Smoogen.
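[Editor's note: the wear-out Stephen describes can be watched in
practice, since SSDs expose remaining endurance through SMART
attributes. A minimal sketch follows, assuming smartmontools is
installed; the attribute names vary by vendor (Samsung reports
Wear_Leveling_Count, Intel Media_Wearout_Indicator), and behind a
MegaRAID-style controller you typically need a passthrough option such
as "-d megaraid,N" rather than the bare device node.]

#!/usr/bin/env python3
"""Sketch: report SSD wear-related SMART attributes via smartctl, to see
how fast a given controller is consuming the drive's endurance."""
import subprocess

WEAR_ATTRS = ("Wear_Leveling_Count", "Media_Wearout_Indicator",
              "Total_LBAs_Written", "Percent_Lifetime_Remain")

def wear_report(dev, passthrough=()):
    """Parse 'smartctl -A' output and keep only wear-related rows."""
    out = subprocess.run(["smartctl", "-A"] + list(passthrough) + [dev],
                         capture_output=True, text=True).stdout
    report = {}
    for line in out.splitlines():
        cols = line.split()
        # SMART table columns: ID NAME FLAG VALUE WORST THRESH ... RAW
        if len(cols) >= 10 and cols[1] in WEAR_ATTRS:
            report[cols[1]] = "normalized=%s raw=%s" % (cols[3], cols[9])
    return report

if __name__ == "__main__":
    # Direct-attached SSD:
    for name, val in wear_report("/dev/sda").items():
        print(name + ": " + val)
    # Same drive as disk 0 behind a MegaRAID HBA (hypothetical):
    # wear_report("/dev/sda", passthrough=("-d", "megaraid,0"))

[Polling this daily and plotting the normalized value over time shows
whether a controller's background writes are eating the drive unusually
fast.]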
On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
> On 8 September 2017 at 11:00, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>> <snip>
>> Could someone, please, elaborate on the statement that "SSDs are not
>> suitable for hardware RAID"?
>
> It will depend on the type of SSD and the type of hardware RAID. There
> are at least four different classes of SSD drives with different levels
> of cache, write/read performance, number of lifetime writes, etc. There
> are also multiple types of hardware RAID. A lot of hardware RAID
> controllers try to even out disk usage in different ways. This means
> 'moving' heavily used data from slow parts of the disk to fast parts,
> etc.

Wow, you learn something every day ;-) Which hardware RAIDs do this
moving of data (manufacturer/model, please; believe it or not, I have
never heard of that ;-). And what are the "slow part" and "fast part"
between which the data are being moved?

Thanks in advance for the tutorial!

Valeri

> On an SSD none of these extra writes are needed, so if the hardware
> RAID doesn't know about SSD technology it will wear the SSD out
> quickly. Other hardware RAID features that can cause faster failures
> on SSDs are the constant test writes some controllers issue to see
> whether disks have gone bad. Again, if you have gone with commodity
> SSDs, this will wear the drive out faster than expected, and boom,
> bad disks.
>
> That said, some hardware RAID controllers are supposedly made to work
> with SSD technology. They don't do those extra writes, and they also
> assume that the disks underneath read/write in near-constant time, so
> queueing of data is handled differently. However, that gear costs
> extra money and is not usually shipped in standard OEM hardware.
>
> --
> Stephen J Smoogen.
Valeri Galtsev wrote:
> <snip>
> Could someone, please, elaborate on the statement that "SSDs are not
> suitable for hardware RAID"?

When you search for it, you'll find that besides wearing out undesirably
fast (which apparently can be attributed mostly to consumer drives
having less over-provisioning), you may also experience degraded
performance over time, which can be worse than what you would get with
spinning disks, or at least not much better.

Add to that firmware designed for an entirely different application and
having its own bugs, plus everyone's experience with surprisingly
incompatible hardware, and you can imagine that using an SSD not
designed for hardware RAID applications with hardware RAID is a bad
idea. There is a difference like night and day between "consumer
hardware" and hardware you can actually use, and that is not only the
price you pay for it.
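[Editor's note: on hw's over-provisioning point, one commonly described
(and somewhat debated) workaround for consumer SSDs is to shrink the
drive's visible capacity with a Host Protected Area, so the firmware
keeps more spare flash for wear leveling and garbage collection. The
sketch below only computes and prints the hdparm command for a
hypothetical blank drive; the 20% figure is an illustrative choice, not
a vendor recommendation, and a used drive should be secure-erased first
so the firmware actually treats the hidden area as spare.]

#!/usr/bin/env python3
"""Sketch: propose an HPA-based over-provisioning setting for a SATA SSD.
Prints the command instead of running it, since setting an HPA can hide
existing data. Assumes hdparm is installed and the drive honors HPA."""
import re
import subprocess
import sys

def native_max_sectors(dev):
    # "hdparm -N" reports: max sectors = <current>/<native>, HPA is ...
    out = subprocess.run(["hdparm", "-N", dev],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"max sectors\s*=\s*\d+/(\d+)", out)
    if not m:
        sys.exit("could not parse hdparm -N output for %s:\n%s" % (dev, out))
    return int(m.group(1))

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"  # hypothetical
    native = native_max_sectors(dev)
    visible = int(native * 0.80)  # leave ~20% of the flash as spare area
    print("native capacity : %d sectors" % native)
    print("proposed visible: %d sectors (~20%% over-provisioning)" % visible)
    # The 'p' prefix makes the new limit permanent across power cycles.
    print("review, then run: hdparm -Np%d "
          "--yes-i-know-what-i-am-doing %s" % (visible, dev))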
On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>> <snip>
>> Could someone, please, elaborate on the statement that "SSDs are not
>> suitable for hardware RAID"?
>
> When you search for it, you'll find that besides wearing out
> undesirably fast (which apparently can be attributed mostly to
> consumer drives having less over-provisioning), you may also
> experience degraded performance over time, which can be worse than
> what you would get with spinning disks, or at least not much better.

Thanks. That seems to clear the fog a little bit.

I still would like to hear manufacturers/models here. My choices would
be Areca or LSI (bought out by Avago, so the former LSI chipset and
microcode/firmware), and as the SSD a Samsung Evo SATA III. Can anyone
who has used these in hardware RAID describe any bad experiences?

I kind of shy away from "crap" hardware, which in the long run is more
expensive even though it looks cheaper ("Pricegrabber is your enemy", I
would normally say to my users). So I would never consider using poorly
or cheaply designed hardware in a setup (e.g. hardware RAID based
storage) one expects performance from. Am I still taking a chance of
hitting a "bad" hardware RAID + SSD combination? Just curious where we
actually stand.

Thanks again for the fruitful discussion!

Valeri

> Add to that firmware designed for an entirely different application
> and having its own bugs, plus everyone's experience with surprisingly
> incompatible hardware, and you can imagine that using an SSD not
> designed for hardware RAID applications with hardware RAID is a bad
> idea. There is a difference like night and day between "consumer
> hardware" and hardware you can actually use, and that is not only the
> price you pay for it.
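[Editor's note: on LSI/MegaRAID controllers specifically, the background
"test" I/O Stephen described maps to the controller's patrol-read
feature, which can at least be inspected, and disabled if you decide the
trade-off is acceptable, from the OS. A sketch follows, assuming the
MegaCli64 binary in its typical install path; the flags are standard
MegaCLI, but whether disabling patrol read is wise for a given array is
a policy call, and patrol reads are reads, so their endurance cost is
smaller than rebalancing writes.]

#!/usr/bin/env python3
"""Sketch: inspect (and optionally disable) MegaRAID patrol read, the
periodic background scan of all member drives. The binary path is the
typical install location; verify it on your system."""
import subprocess

MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # typical path, verify locally

def megacli(*args):
    """Run a MegaCli command against all adapters and return its output."""
    return subprocess.run([MEGACLI] + list(args) + ["-aALL"],
                          capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Show the current patrol-read mode, rate, and schedule.
    print(megacli("-AdpPR", "-Info"))
    # Uncomment only after weighing the reliability trade-off:
    # megacli("-AdpPR", "-Dsbl")  # disable patrol read on all adapters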