hi,

I am planning to replace my old CentOS 6 mail server soon. Most details
are quite obvious and do not need to be changed, but the old system was
running on spinning disks, and that is certainly not the best option for
today's mail servers.

With spinning disks, HW-RAID6 was the way to go to increase reliability
and speed. Today, I get the feeling that traditional RAID is not the
best option for SSDs. I am reading that all RAID members in SSD arrays
age synchronously, so that the risk of a massive failure of more than
one disk is more likely than with HDDs. There are many other concerns,
like excessive write load compared to non-RAID systems, etc.

Is there any consensus on what disk layout should be used these days?

I have been looking for some kind of master-slave system, where the (one
or many) SSD takes all writes and reads, but a slave HDD runs in
parallel as a backup, like in a RAID1 system. Is there any such system?

Any thoughts?

best regards
Michael Schumacher
On Wed, 16 Sep 2020 at 12:12, Michael Schumacher
<michael.schumacher at pamas.de> wrote:

> I have been looking for some kind of master-slave system, where the
> (one or many) SSD is taking all writes and reads, but the slave HDD
> runs in parallel as a backup system like in a RAID1 system. Is there
> any such system?

I don't think so, because the drives would always be out of sync, and on
a restart it would be hard to know whether a drive is out of sync for a
good reason or a bad one. For most of the SSD RAIDs I have seen, people
just make sure to buy disks which are spec'd for more writes, or with
similar 'smarter' enterprise trim. I have also read about the
synchronicity problem, but I think this may be a theory-versus-reality
problem. In theory they should all fail at once; in reality, at least
for the arrays I have used for 3 years, they seem to fail at different
times. That said, I only have 3 systems over 3 years with SSD drives
running RAID6, so I only have anecdata rather than data.

> Any thoughts?
>
> best regards
> Michael Schumacher
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> https://lists.centos.org/mailman/listinfo/centos

-- 
Stephen J Smoogen.
Hi Michael,

With SSDs, no matter what storage technology is used, you pay your money
and you take your choice. The more expensive SSDs have higher I/O rates,
higher data bandwidth and better durability. I would go for NVMe, as
this gives a higher data rate with PCIe 3.0, and PCIe 4.0 (twice the
data rate) drives are just coming onto the market.

I believe that traditional RAID 5 and 6 are not required for SSDs; I
have configured all my customers' SSD subsystems as RAID 1 (mirror), for
reduced overhead. Cost determines whether the above is acceptable. Also
consider whether you will use hardware or software RAID 1. There are
many other questions, but the above is a start.

Regards,

Mark Woolfson
MW Consultancy Ltd
Leeds LS18 4LY
United Kingdom
Tel: +44 113 259 1204
Mob: +44 786 065 2778

-----Original Message-----
From: Michael Schumacher
Sent: Wednesday, September 16, 2020 5:11 PM
To: CentOS mailing list
Subject: [CentOS] storage for mailserver
Hi Michael,

RAID 1 is not uncommon with SSDs (be they SATA/SAS/NVMe). RAID 5/6 wear
SSD drives more, so they are generally best avoided. You really need to
monitor your SSDs' health to help avoid failures. And obviously, always
have your backups...

-yoctozepto

On Wed, Sep 16, 2020 at 6:12 PM Michael Schumacher
<michael.schumacher at pamas.de> wrote:

> Is there any common sense what disk layout should be used these days?
On 2020-09-16 11:26, Stephen John Smoogen wrote:

> I have also read about the synchronicity problem but I think this may
> be a theory vs reality problem. In theory they should all fail at
> once, in reality at least for the arrays I have used for 3 years,
> they seem to fail in different times.

I fully agree that synchronous failure of SSDs in RAID is made up, or at
least grossly overrated.
SSD failure _probability_ increases with the number of write operations
(into the same area), but failure still has a stochastic nature. If an
SSD is spec'ed for N writes, it doesn't mean that on write N+1 the SSD
will fail. It only means that after N writes the failure probability is
below [some acceptable value], which, however, is much higher than that
of an unused SSD.

That said, single-SSD failure probability after a long run is some small
value, say q. The failure of another SSD is an event independent of the
failure of the first (even though both probabilities q increase with the
number of writes), hence the probabilities of failures are:

one SSD failed:    q
two SSDs failed:   q^2
three SSDs failed: q^3

Thus multi-failures (within some period of time, say 1 day or 1 week)
are still far less probable events than a single failure. The following
numbers have nothing to do with the failure probability of any actual
device; they are just an illustration:

if q = 10^(-10) (ten to the minus 10th power), then
   q^2 = 10^(-20)
   q^3 = 10^(-30)

My apologies for saying trivial things; they just give, IMHO, a feeling
of what to take into consideration and what to ignore safely. And no, I
don't intend to start a flame war on views of statistics, or on hardware
vs software RAIDs, or RAIDs vs ZFS. Just think it over and draw your own
conclusions.

Valeri

-- 
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
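The arithmetic above can be checked with a quick one-liner; the value of
q below is purely illustrative (as in the post), not a measured failure
rate, and the calculation assumes the failures are independent events:

```shell
# Probability of 1, 2, or 3 drives failing in the same window,
# assuming independent failures with per-drive probability q.
q=0.03   # illustrative value only
awk -v q="$q" 'BEGIN {
    printf "one drive:    %g\n", q          # q
    printf "two drives:   %g\n", q * q      # q^2
    printf "three drives: %g\n", q * q * q  # q^3
}'
```

With q = 0.03 this prints roughly 0.03, 0.0009 and 2.7e-05, showing how
quickly the joint probability falls off under the independence
assumption.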
On 16/09/2020 17:11, Michael Schumacher wrote:

> I have been looking for some kind of master-slave system, where the
> (one or many) SSD is taking all writes and reads, but the slave HDD
> runs in parallel as a backup system like in a RAID1 system. Is there
> any such system?

You can achieve this with a hybrid RAID1 mixing SSDs and HDDs, marking
the HDD members as --write-mostly. Most of the reads will then come from
the faster SSDs, retaining much of the speed advantage, but you have the
redundancy of both SSDs and HDDs in the array.

Read performance is not far off the native read performance of the SSD,
and writes are mostly cached / happen in the background, so they are not
so noticeable on a mail server anyway.

I kind of stumbled across this setup by accident when I added an NVMe
SSD to an existing RAID1 array consisting of 2 HDDs:

# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda1[2](W) sdb1[4](W) nvme0n1p1[3]
      485495616 blocks super 1.0 [3/3] [UUU]
      bitmap: 3/4 pages [12KB], 65536KB chunk

Note the 3 devices in the above RAID1 array: 2 x HDDs, marked with a (W)
indicating they are in --write-mostly mode, and one SSD (NVMe) device.
I just went for 3 devices in the array because it started life as a
2 x HDD array and I added the third SSD device, but you can mix and
match to suit your needs. The following article may be helpful, or
search for 'mdadm write-mostly' for more info:

https://www.tansi.org/hybrid/
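For reference, a hybrid array like the one shown above can also be built
from scratch with mdadm. The device names here (/dev/nvme0n1p1,
/dev/sda1, /dev/sdb1) and the md name are placeholders for your own
partitions, so treat this as a sketch rather than a recipe:

```shell
# Create a 3-way RAID1: one NVMe SSD plus two HDDs flagged write-mostly,
# so reads are served from the SSD whenever it is available.
mdadm --create /dev/md127 --level=1 --raid-devices=3 \
    /dev/nvme0n1p1 \
    --write-mostly /dev/sda1 /dev/sdb1

# Or flag an HDD member as write-mostly on an already-running array
# via sysfs, without recreating it:
echo writemostly > /sys/block/md127/md/dev-sda1/state

# Verify: write-mostly members show a (W) suffix in /proc/mdstat.
cat /proc/mdstat
```

The --write-mostly flag applies to the devices listed after it on the
mdadm command line, which is why the SSD is listed first.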
Hello Phil,

Wednesday, September 16, 2020, 7:40:24 PM, you wrote:

PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
PP> marking the HDD members as --write-mostly, meaning most of the reads
PP> will come from the faster SSDs retaining much of the speed advantage,
PP> but you have the redundancy of both SSDs and HDDs in the array.

PP> Read performance is not far off native write performance of the SSD, and
PP> writes mostly cached / happen in the background so are not so noticeable
PP> on a mail server anyway.

very interesting. Do you or anybody else have experience with this
setup? Any test results to compare? I will do some testing if nobody
can come up with comparisons.

best regards
---
Michael Schumacher
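For producing comparable numbers, fio is the usual tool. The target
filename, size, and tuning values below are assumptions to adapt to a
test file on the array being measured; this is a sketch, not a tuned
benchmark:

```shell
# Random-read test against a file on the array's filesystem, to see
# whether reads are being served at SSD speed. Requires the fio package.
# --filename and --size are placeholders; point them at the md array.
fio --name=randread --filename=/mnt/md127/fio.test --size=1G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
```

Running the same job against a plain-HDD array and the hybrid array
would show how much of the SSD's read performance the write-mostly
setup retains.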
On 9/16/20 10:40 AM, Phil Perry wrote:

> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
> marking the HDD members as --write-mostly, meaning most of the reads
> will come from the faster SSDs retaining much of the speed advantage,
> but you have the redundancy of both SSDs and HDDs in the array.

Was the write-behind crash bug ever actually fixed? I don't see it in
more recent release notes, but the bug listed isn't public, so I can't
check its status.

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_kernel