Hello list.

In the next few days we are going to install CentOS 7 on a new server
with 4 x 3 TB SATA HDDs as RAID 5. We will use the graphical installer
to install and set up the RAID.

Do I have to consider anything before installation, because the disks
are very large?

Does the graphical installer use parted to set up/format the RAID?

I hope the above makes sense.

Thank you in advance.

Nikos
I have done this a couple of times successfully.

I did set the boot partitions etc. up as RAID1 on sda and sdb. I believe this
is an old instruction, based on the fact that the kernel needed access to
these partitions before RAID access was available.

I'm sure someone more knowledgeable will be able to say whether this is
still required.

Gary

On Thursday 27 June 2019 14:36:37 Nikos Gatsis - Qbit wrote:
> Hello list.
>
> In the next few days we are going to install CentOS 7 on a new server
> with 4 x 3 TB SATA HDDs as RAID 5. We will use the graphical installer
> to install and set up the RAID.
>
> Do I have to consider anything before installation, because the disks
> are very large?
>
> Does the graphical installer use parted to set up/format the RAID?
>
> I hope the above makes sense.
>
> Thank you in advance.
>
> Nikos
I'd isolate all that RAID stuff from your OS, so that the root, /boot, /usr,
/etc, /tmp, /bin and swap are on "normal" partition(s). I know I'm missing
some directories, but the point is that you should be able to unmount the
RAID stuff to adjust it without crippling your system.

https://www.howtogeek.com/117435/htg-explains-the-linux-directory-structure-explained/

On 6/27/19, 9:37 AM, "CentOS on behalf of Nikos Gatsis - Qbit" <centos-bounces at centos.org on behalf of ngatsis at qbit.gr> wrote:

    Hello list.

    In the next few days we are going to install CentOS 7 on a new server
    with 4 x 3 TB SATA HDDs as RAID 5. We will use the graphical installer
    to install and set up the RAID.

    Do I have to consider anything before installation, because the disks
    are very large?

    Does the graphical installer use parted to set up/format the RAID?

    I hope the above makes sense.

    Thank you in advance.

    Nikos
On Thu, 27 Jun 2019, Peda, Allan (NYC-GIS) wrote:
> I'd isolate all that RAID stuff from your OS, so that the root, /boot,
> /usr, /etc, /tmp, /bin and swap are on "normal" partition(s). I know I'm
> missing some directories, but the point is that you should be able to
> unmount the RAID stuff to adjust it without crippling your system.
>
> https://www.howtogeek.com/117435/htg-explains-the-linux-directory-structure-explained/

As long as you want none of the advantages of RAID to apply to your system
as a whole.

jh
On 27.06.2019 at 15:36, Nikos Gatsis - Qbit wrote:
> Hello list.
>
> In the next few days we are going to install CentOS 7 on a new server
> with 4 x 3 TB SATA HDDs as RAID 5. We will use the graphical installer
> to install and set up the RAID.

You hopefully plan to use just 3 of the disks for the RAID 5 array and the
4th as a hot spare.

> Do I have to consider anything before installation, because the disks
> are very large?
>
> Does the graphical installer use parted to set up/format the RAID?

It does. See the RHEL 7 installation documentation.

> I hope the above makes sense.
>
> Thank you in advance.
>
> Nikos

Alexander
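For reference, a rough command-line sketch of a 3-member RAID 5 with one hot
spare (the installer does the equivalent for you; the partition names below
are only examples, not taken from the thread):

    # Three active members plus one hot spare (hypothetical partitions):
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
          /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    # Watch the initial build:
    cat /proc/mdstat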
At Thu, 27 Jun 2019 14:48:30 +0100 CentOS mailing list <centos at centos.org> wrote:
> I have done this a couple of times successfully.
>
> I did set the boot partitions etc. up as RAID1 on sda and sdb. I believe
> this is an old instruction, based on the fact that the kernel needed
> access to these partitions before RAID access was available.

Actually *grub* needs access to /boot to load the kernel. I don't believe
that grub can access (software) RAID filesystems. RAID1 is effectively an
exception, because it is just a mirror set and grub can access (read-only)
any one of the mirror set elements as a standalone disk. Note that UEFI
partitions can't be RAID at all (they are FAT filesystems) and need to be
accessible by the BIOS / boot EEPROM.

Once the kernel starts, the RAID array(s) can be started, then LVM volumes
can be scanned for and set up, then the root file system mounted, and then
the system is up and running -- all of that magic is handled in the
initramfs.

So the rule of thumb is a "small" /boot/efi FAT file system (if using UEFI
boot), a /boot mirror set, and the rest whatever RAID logic you like,
probably with LVM on top of that. Usually one creates a UEFI partition on
both (or all three or more) disks -- they can't be a mirror set, but they
certainly can be rsync'ed regularly. Then a smallish mirror set for /boot,
then whatever is left used for the main system filesystem: RAID whatever,
etc.

> I'm sure someone more knowledgeable will be able to say whether this is
> still required.

Yes. See above.

> Gary

-- 
Robert Heller             -- 978-544-6933
Deepwoods Software        -- Custom Software Services
http://www.deepsoft.com/  -- Linux Administration Services
heller at deepsoft.com     -- Webhosting Services
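A concrete sketch of that rule of thumb, assuming (hypothetically) four GPT
disks /dev/sda through /dev/sdd, each partitioned as 1 = EFI system
partition, 2 = /boot, 3 = everything else; the installer can do all of this
in the GUI, this is only the command-line equivalent:

    # 4-way RAID1 mirror for /boot:
    mdadm --create /dev/md0 --level=1 --raid-devices=4 \
          /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    # Main array (RAID5 here, or whatever level you prefer), LVM on top:
    mdadm --create /dev/md1 --level=5 --raid-devices=4 \
          /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    pvcreate /dev/md1
    vgcreate vg_system /dev/md1

    # The EFI system partitions cannot be mirrored; keep the spare copies
    # in sync by hand, e.g. (assuming the spare ESP is mounted there):
    rsync -a /boot/efi/ /mnt/efi-sdb/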
On 6/27/19 6:36 AM, Nikos Gatsis - Qbit wrote:
> Do I have to consider anything before installation, because the disks
> are very large?

Probably not. You'll need to use GPT because they're large, but for a new
server you would probably need to do that anyway in order to boot under
UEFI.

The partition layout should be the same on all disks. /boot and /boot/efi
must be either RAID1 or regular partitions, rather than RAID5.
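The installer should choose GPT on its own for disks this large, but if you
prefer to label the disks by hand first, something like the following works
(the disk name is only an example):

    parted --script /dev/sda mklabel gpt
    parted --script /dev/sda print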
On 27/06/2019 at 15:36, Nikos Gatsis - Qbit wrote:
> Do I have to consider anything before installation, because the disks
> are very large?

I'm doing this kind of installation quite regularly. Here's my two cents.

1. Use RAID6 instead of RAID5. You'll lose a little space, but you'll gain
quite a bit of redundancy.

2. The initial sync will be very (!) long, something like a day or two. You
can use your server during that time, but it won't be very responsive.

3. Here's a neat little trick you can use to speed up the initial sync.

$ sudo echo 50000 > /proc/sys/dev/raid/speed_limit_min

I've written a detailed blog article about the kind of setup you want. It's
in French, but the Linux bits are universal.

https://www.microlinux.fr/serveur-lan-centos-7/

Cheers,

Niki

-- 
Microlinux - Solutions informatiques durables
7, place de l'Église - 30730 Montpezat
Site : https://www.microlinux.fr
Mail : info at microlinux.fr
Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12
If you can afford it, I would prefer RAID10. You will lose half of the disk
space, but you will get a really fast system. It depends on what you need /
what you will use the server for.

Mirek

On 28.6.2019 at 7:01, Nicolas Kovacs wrote:
> On 27/06/2019 at 15:36, Nikos Gatsis - Qbit wrote:
>> Do I have to consider anything before installation, because the disks
>> are very large?
>
> I'm doing this kind of installation quite regularly. Here's my two cents.
>
> 1. Use RAID6 instead of RAID5. You'll lose a little space, but you'll
> gain quite a bit of redundancy.
>
> 2. The initial sync will be very (!) long, something like a day or two.
> You can use your server during that time, but it won't be very responsive.
>
> 3. Here's a neat little trick you can use to speed up the initial sync.
>
> $ sudo echo 50000 > /proc/sys/dev/raid/speed_limit_min
>
> I've written a detailed blog article about the kind of setup you want.
> It's in French, but the Linux bits are universal.
>
> https://www.microlinux.fr/serveur-lan-centos-7/
>
> Cheers,
>
> Niki
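For comparison, a 4-disk RAID10 gives you the capacity of two disks and much
better random-write performance than the parity levels. A hypothetical
command-line equivalent (device names are only examples):

    mdadm --create /dev/md1 --level=10 --raid-devices=4 \
          /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3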
Thank you all for your answers.

Nikos.

On 27/6/2019 4:48 PM, Gary Stainburn wrote:
> I have done this a couple of times successfully.
>
> I did set the boot partitions etc. up as RAID1 on sda and sdb. I believe
> this is an old instruction, based on the fact that the kernel needed
> access to these partitions before RAID access was available.
>
> I'm sure someone more knowledgeable will be able to say whether this is
> still required.
>
> Gary
On Fri, Jun 28, 2019 at 07:01:00AM +0200, Nicolas Kovacs wrote:
> 3. Here's a neat little trick you can use to speed up the initial sync.
>
> $ sudo echo 50000 > /proc/sys/dev/raid/speed_limit_min

You can't have actually tested these instructions if you think
'sudo echo > /path' actually works. The redirection is performed by your
unprivileged shell, not by sudo. The idiom for this is typically:

echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min

-- 
Jonathan Billings <billings at negate.org>
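For completeness, both standard forms of this tweak (the value is a
per-device rate in KiB/s; the second uses the sysctl name that corresponds
to the /proc path above):

    echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min

    sudo sysctl -w dev.raid.speed_limit_min=50000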
Nikos Gatsis - Qbit wrote on 6/27/2019 8:36 AM:
> Hello list.
>
> In the next few days we are going to install CentOS 7 on a new server
> with 4 x 3 TB SATA HDDs as RAID 5. We will use the graphical installer
> to install and set up the RAID.
>
> Do I have to consider anything before installation, because the disks
> are very large?
>
> Does the graphical installer use parted to set up/format the RAID?

Hi Nikos, I've read the other posts in this thread and wanted to provide my
perspective. I've used Linux RAID at various times over the past 10-20 years
with both desktop and server class hardware. I've also used hardware RAID
controllers from 3ware, Adaptec, LSI, AMI, and others with IDE, SATA, SAS,
and SCSI drives. The goal of RAID 1 and above is to increase availability.
Unfortunately, I've never had Linux software RAID improve availability - it
has only decreased availability for me. This has been due to a combination
of hardware and software issues that are generally handled well by HW RAID
controllers, but are often handled poorly or unpredictably by
desktop-oriented hardware and Linux software.

Given that Linux software RAID does not achieve the goal of RAID (improved
availability), my recommendation would be to avoid it. If you are looking
for a backup mechanism, RAID is not it (use a backup program instead). If
you do need high availability, my recommendation is to purchase an LSI-based
RAID controller. If you plan to use RAID 5, make sure the model you choose
has a write cache (this could double the cost of the controller). Used IBM,
HP, or Dell RAID controllers are available for a reasonable price, or you
can purchase a new one from Newegg or wherever. SAS RAID controllers will
work with either SAS or SATA drives, and you can purchase the appropriate
breakout cables for connecting the controller to individual drives. Since
you're planning on using 3TB+ drives that are likely 4k native sector, I'd
recommend a newer model controller like the Dell PERC H730 (LSI MegaRAID SAS
9361-8i) for RAID5/6 or a PERC H330 (LSI MegaRAID SAS 9341-8i) for RAID
0/1/10.
On 28.06.2019 at 16:46, Blake Hudson <blake at ispn.net> wrote:
> Hi Nikos, I've read the other posts in this thread and wanted to provide
> my perspective. I've used Linux RAID at various times over the past 10-20
> years with both desktop and server class hardware. I've also used
> hardware RAID controllers from 3ware, Adaptec, LSI, AMI, and others with
> IDE, SATA, SAS, and SCSI drives. The goal of RAID 1 and above is to
> increase availability. Unfortunately, I've never had Linux software RAID
> improve availability - it has only decreased availability for me. This
> has been due to a combination of hardware and software issues that are
> generally handled well by HW RAID controllers, but are often handled
> poorly or unpredictably by desktop-oriented hardware and Linux software.
>
> Given that Linux software RAID does not achieve the goal of RAID
> (improved availability), my recommendation would be to avoid it. [...]

We have good experiences with MD RAID (Linux software RAID) for getting data
redundancy at low cost. For availability we use clustering (at a different
hardware level) ...

-- 
LF
On 29/06/19 2:46 AM, Blake Hudson wrote:
> Hi Nikos, I've read the other posts in this thread and wanted to provide
> my perspective. I've used Linux RAID at various times over the past 10-20
> years with both desktop and server class hardware. I've also used
> hardware RAID controllers from 3ware, Adaptec, LSI, AMI, and others with
> IDE, SATA, SAS, and SCSI drives. The goal of RAID 1 and above is to
> increase availability. Unfortunately, I've never had Linux software RAID
> improve availability - it has only decreased availability for me. This
> has been due to a combination of hardware and software issues that are
> generally handled well by HW RAID controllers, but are often handled
> poorly or unpredictably by desktop-oriented hardware and Linux software.

Sorry for your poor experience. I have used Linux software RAID and achieved
much improved availability with it - most often I use RAID 1, and I have had
disks fail with no impact to the client other than slightly reduced response
times (in fact they were totally unaware that a drive had failed until I
told them). The faulty drive was replaced (by a local person who barely knew
how to use a screwdriver), the array resynchronized, and all was well - zero
data lost. It was a hot-swap bay, so the server did not even have to be
powered down - zero customer-noticed impact, 100% availability.

> Given that Linux software RAID does not achieve the goal of RAID
> (improved availability), my recommendation would be to avoid it. If you
> are looking for a backup mechanism, RAID is not it (use a backup program
> instead). If you do need high availability, my recommendation is to
> purchase an LSI-based RAID controller. [...]
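The replacement procedure described above amounts to a handful of commands.
A rough sketch, with hypothetical array and member names:

    # Mark the dying member as failed and remove it from the array:
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

    # Swap the drive in its hot-swap bay, partition it like its mirror
    # partner (with sgdisk or parted), then add it back and let it resync:
    mdadm /dev/md0 --add /dev/sdb1
    cat /proc/mdstat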
On 6/28/19 4:46 PM, Blake Hudson wrote:
> Unfortunately, I've never had Linux software RAID improve availability -
> it has only decreased availability for me. This has been due to a
> combination of hardware and software issues that are generally handled
> well by HW RAID controllers, but are often handled poorly or
> unpredictably by desktop-oriented hardware and Linux software.

I have to add my data point, and it is the opposite experience.

Software RAID1, RAID5 (and RAID10) have done their job perfectly for me,
with disks failing and being replaced without issues; nor is the resync a
very noticeable speed degradation.

On the other hand, hardware RAID boards have always been a disaster: slow
and ridiculous BIOS utilities, drives being pushed out of the array
randomly, SMART data no longer available, and undocumented "formatting"
headers on the drives - so good luck finding an identical controller when
the board dies (yeah, with a battery on board, not the best component for
years of reliability...).

It is always software RAID for me. Software RAID + LVM on top is great. For
example: RAID1 across sda1, sdb1, sdc1, sdd1 for /boot (yes, a 4-disk RAID1
- and have a look at "mdadm -e" to make it bootable without the bootloader
even knowing it is a RAID); then sd{a,b,c,d}{2,3,4,5,6,...} partitions of
reasonable sizes (e.g. 500GB), combined as you prefer, such as RAID1 between
sda2-sdb2, RAID1 between sdc2-sdd2, RAID5 between sda3-sdb3-sdc3-sdd3, RAID5
between sda4-sdb4-sdc4-sdd4, ...; then pvcreate on the RAID assemblies to
place your vgs and lvs.

Any movement/enlargement of a filesystem will be easy thanks to LVM. Any
drive failure will be easy thanks to the software RAID. You basically never
need to turn off the system anymore.

Regards.

-- 
Roberto Ragusa
mail at robertoragusa.it
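A sketch of the /boot part of that scheme (hypothetical names and sizes).
"--metadata=1.0" is the "mdadm -e" choice referred to above: it keeps the
RAID superblock at the end of each member, so the bootloader can read any
member as a plain filesystem. The LVM lines assume the larger md arrays have
already been created as in the earlier examples:

    # 4-way RAID1 for /boot, superblock at the end of the members:
    mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Each of the bigger md arrays becomes an LVM physical volume:
    pvcreate /dev/md2 /dev/md3
    vgcreate vg0 /dev/md2 /dev/md3
    lvcreate -L 200G -n srv vg0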
On Jun 28, 2019, at 8:46 AM, Blake Hudson <blake at ispn.net> wrote:
> Linux software RAID ... has only decreased availability for me. This has
> been due to a combination of hardware and software issues that are
> generally handled well by HW RAID controllers, but are often handled
> poorly or unpredictably by desktop-oriented hardware and Linux software.

Would you care to be more specific? I have little experience with software
RAID other than ZFS, so I don't know what these "issues" might be.

I do have a lot of experience with hardware RAID, and the grass isn't very
green on that side of the fence, either. Some of this will repeat others'
points, but it's worth repeating, since it means they're not alone in their
pain:

0. Hardware RAID is a product of the time it was produced. My old parallel
IDE and SCSI RAID cards are useless because you can't get disks with that
port type any more; my oldest SATA and SAS RAID cards can't talk to disks
bigger than 2 TB; and of those older hardware RAID cards that still do work,
they won't accept a RAID created by a controller of another type, even if
it's from the same company. (Try attaching a 3ware 8000-series RAID to a
3ware 9000-series card, for example.)

Typical software RAID never drops backwards compatibility. You can always
attach an old array to new hardware, or even new arrays to old hardware,
within the limitations of the hardware - and those limitations aren't the
software RAID's fault.

1. Hardware RAID requires hardware-specific utilities. Many hardware RAID
systems don't work under Linux at all, and of those that do, not all provide
sufficiently useful Linux-side utilities. If you have to reboot into the
RAID BIOS to fix anything, that's bad for availability.

2. The number of hardware RAID options is going down over time. Adaptec's
almost out of the game, 3ware was bought by LSI and then had their products
all but discontinued, and most of the other options you list are rebadged
LSI or Adaptec. Eventually it's going to be LSI or software RAID, and then
LSI will probably get out of the game, too. This market segment is dying
because software RAID no longer has any practical limitations that hardware
can fix.

3. When you do get good-enough Linux-side utilities, they're often not
well-designed. I don't know anyone who likes the megaraid or megacli64
utilities. I have more experience with 3ware's tw_cli, and I never developed
facility with it beyond a pidgin level, so that to do anything even slightly
uncommon I have to go back to the manual to piece the command together, or
else risk roaching the still-working disks.

By contrast, I find the zfs and zpool commands well-designed and easy to
use. There's no mystery why that should be so: hardware RAID companies have
their expertise in hardware, not software. Also, "man zpool" doesn't
suck. :)

That coin does have an obverse face, which is that young software RAID
systems go through a phase where they have to re-learn just how false,
untrustworthy, unreliable, duplicitous, and mendacious the underlying
hardware can be. But that expertise builds up over time, so that a mature
software RAID system copes quite well with the underlying hardware's
failings. The inverse expertise in software design doesn't build up on the
hardware RAID side. I assume this is because they fire the software teams
once they've produced a minimum viable product, then re-hire a new team when
their old utilities and monitoring software get so creaky that they have to
be rebuilt from scratch.
Then you get a *new* bag of ugliness in the world. Software RAID systems, by
contrast, evolve continuously, and so usually tend towards perfection.

The same problem *can* come up in the software RAID world: witness how much
wheel reinvention is going on in the Stratis project! The same amount of
effort put into ZFS would have been a better use of everyone's time. That
option doesn't even exist on the hardware RAID side, though. Every hardware
RAID provider must develop their command-line utilities and monitoring
software de novo, because even if the Other Company open-sourced its
software, that other software couldn't work with their proprietary hardware.

4. Because hardware RAID is abstracted below the OS layer, the OS and
filesystem have no way to interact intelligently with it. ZFS is at the
pinnacle of this technology here, but CentOS is finally starting to get this
through Stratis and the extensions Stratis has required of XFS and LVM. I
assume btrfs also provides some of these benefits, though that's on track to
becoming off-topic here.

ZFS can tell you which file is affected by a block that's bad across enough
disks that redundancy can't fix it. This gives you a new, efficient recovery
option: restore that file from backup or delete it, allowing the underlying
filesystem to rewrite the bad block on all disks. With hardware RAID, fixing
this requires picking one disk as the "real" copy and telling the RAID card
to blindly rewrite all the other copies.

Another example is resilvering: because a hardware RAID has no knowledge of
the filesystem, a resilver during disk replacement requires rewriting the
entire disk, which takes 8-12 hours these days. If the volume has a lot of
free space, a filesystem-aware software RAID resilver can copy only the
blocks containing user data, greatly reducing recovery time.

Anecdotally, I can tell you that the ECCs involved in NAS-grade SATA
hardware aren't good enough on their own. We had a ZFS server that would
detect about 4-10 kB of bad data on one disk in the pool during every
weekend scrub. We never figured out whether the problem was in the disk, its
drive cage slot, or its cabling, but it was utterly repeatable. It was also
utterly unimportant to diagnose, because ZFS kept fixing the problem for us,
automatically!

The thing is, we'd have never known about this underlying hardware fault if
ZFS's 128-bit checksums weren't able to reduce the chances of undetected
error to practically-impossible levels. Since ZFS knows, by those same
hashes, which copy of the data is uncorrupted, it fixed the problem for us
automatically each time, for years on end. I doubt any hardware RAID system
you favor would have fared as well.

*That's* uptime. :)

5. Hardware RAID made sense back when a PC motherboard rarely had more than
2 hard disk controller ports, and those shared a single IDE channel. In
those days, CPUs were slow enough that calculating parity was really costly,
and hard drives were small enough that 8+ disk arrays were often required
just to get enough space. Now that you can get 10+ SATA ports on a mobo,
parity calculation costs only a tiny slice of a single core in your
multicore CPU, and a mirrored pair of multi-terabyte disks is often plenty
of space, hardware RAID is increasingly being pushed to the margins of the
server world.

Software RAID doesn't have port count limits at all. With hardware RAID, I
don't buy a 4-port card when a 2-port card will do, because that costs me
$100-200 more.
With software RAID, I can usually find another place to plug in a drive
temporarily, and that port was "free" because it came with the PC.

This matters when I have to replace a disk in my hardware RAID mirror,
because now I'm out of ports: I have to choose one of the disks to drop out
of the array, losing all redundancy before the recovery even starts, because
I need to free up one of the two hardware connectors for the new disk.
That's fine when the disk I'm replacing is dead, dead, dead, but that isn't
usually the case in my experience. Instead, the disk I'm replacing is merely
*dying*, and I'm hoping to get it replaced before it finally dies.

What that means in practice is that with software RAID, I can have an
internal mirror, then temporarily connect a replacement drive in a USB or
Thunderbolt disk enclosure. Now the resilver operation proceeds with both
original disks available, so that if we find that the "good" disk in the
original mirror has a bad sector too, the software RAID system might find
that it can pull a good copy from the "bad" disk, saving the whole
operation.

Only once the resilver is complete do I have to choose which disk to drop
out of the array in a software RAID system. If I choose incorrectly, the
software RAID stops work and lets me choose again. With hardware RAID, if I
choose incorrectly, it's on the front end of the operation instead, so I'll
end up spending 8-12 hours creating a redundant copy of "Wrong!"

Bottom line: I will not shed a tear when my last hardware RAID goes away.
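In ZFS terms, the workflow described above is roughly the following hedged
sketch (pool and device names are hypothetical; "zpool replace" resilvers
onto the new disk while the old member stays in the pool):

    # See which member is throwing read/checksum errors:
    zpool status -v tank

    # Attach the replacement (e.g. in a USB enclosure) and replace the
    # dying member; the old disk remains available during the resilver:
    zpool replace tank /dev/disk/by-id/ata-OLD-DYING-DISK \
                       /dev/disk/by-id/usb-NEW-DISK

    # The periodic scrub is what catches (and repairs) silent corruption:
    zpool scrub tank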