Once upon a time, Simon Matter <simon.matter at invoca.ch> said:
> Are you sure that's still true? I've done it that way in the past but it
> seems at least with EL8 you can put /boot/efi on md raid1 with metadata
> format 1.0. That way the EFI firmware will see it as two independent FAT
> filesystems. Only thing you have to be sure is that nothing ever writes to
> these filesystems when Linux is not running, otherwise your /boot/efi md
> raid will become corrupt.
>
> Can someone who has this running confirm that it works?

Yes, that's even how RHEL/Fedora set it up currently, I believe. But like
you say, it only works as long as there's no other OS on the system and the
UEFI firmware itself is never used to change anything on the FS. It's not
entirely clear that most UEFI firmwares would handle a drive failure
correctly either (since it's outside the scope of UEFI), so IIRC there's
been some consideration in Fedora of dropping this support.

And... I'm not sure if GRUB2 handles RAID 1 /boot fully correctly, for
things where it writes to the FS (grubenv updates for "savedefault", for
example). But there are other issues with GRUB2's FS handling anyway, so
this case is probably far down the list.

I think that having RAID 1 for /boot and/or /boot/efi can be helpful (and
I've set it up, definitely not saying "don't do that"), but it has to be
handled with care and possibly (probably?) would need manual intervention
to get booting again after a drive failure or replacement.
--
Chris Adams <linux at cmadams.net>
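For anyone wanting to try the layout described above, a minimal sketch of
creating the metadata-1.0 mirror for the ESP might look like the following.
The device names are examples, not from any real system, and the commands
are only echoed rather than executed, since they destroy data:

```shell
# Hypothetical sketch: put /boot/efi on an md RAID1 mirror with 1.0
# metadata (superblock at the END of the partition, so the firmware
# sees each member as a plain FAT filesystem).
run() { printf '%s\n' "$*"; }   # change to run() { "$@"; } to execute for real

ESP_A=/dev/sda1   # example first ESP partition
ESP_B=/dev/sdb1   # example second ESP partition
run mdadm --create /dev/md/esp --level=1 --raid-devices=2 \
    --metadata=1.0 "$ESP_A" "$ESP_B"
run mkfs.vfat -F32 /dev/md/esp
```

The key detail is `--metadata=1.0`: the 1.1 and 1.2 formats put the md
superblock at the start of the partition, which would break the firmware's
view of the FAT filesystem.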
Continuing this thread, and focusing on RAID1. I got an HPE ProLiant
Gen10+ that has hardware RAID support (I can turn it off if I want). I am
planning two groupings of RAID1 (it has 4 bays). There is also an internal
USB boot port.

I am really a newbie in working with RAID. From this thread it sounds like
I want /boot and /boot/efi on that USB boot device. Will it work to put /
on the first RAID group? What happens if the 1st drive fails and it is
replaced with a new blank drive? Will the config in /boot figure this out,
or does the RAID hardware completely mask the two drives, so the system
runs on the good one while the new one is being replicated?

I also don't see how to build that boot USB stick. I will have the install
ISO in the boot USB port and the 4 drives set up with hardware RAID. How
are things figured out? I am missing some important piece here.

Oh, HP does list Red Hat support for this unit.

thanks for all help.

Bob

On 1/6/23 11:45, Chris Adams wrote:
> [snip]
Hi All, very interesting thread, I'll add my 2 cents point of view for free
to all of you ...
A lot of satisfaction with HP ProLiant MicroServers, from the first Gen6
(AMD Neo) to the 1-year-old MicroServer Gen10 X3216 (CentOS 6/7/8), so I
think yours is the right choice!
In /boot/efi/ (mounted from the first partition of the first GPT disk) you
only have the GRUB2 EFI binary, not the vmlinuz kernel, the initrd image,
or the grub.cfg itself ...
To be more precise, a grub.cfg file does exist there, but it's only a
static stub that finds the real one using the filesystem UUID:
$ cat /boot/efi/EFI/ubuntu/grub.cfg
search.fs_uuid d9f44ffb-3cb8-4783-8928-0123e5d8a149 root
set prefix=($root)'/@/boot/grub'
configfile $prefix/grub.cfg
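As a quick sanity check, the UUID in that stub should match what the
running system reports for the /boot filesystem (e.g. via
`blkid -s UUID -o value <device>`). Extracting it from the stub is a
one-liner; this sketch recreates the example stub under /tmp, so the path
and UUID are just the ones from the example above, not from a real box:

```shell
# Recreate the stub from the example above and pull out the UUID that
# search.fs_uuid points at (values are from the example, not a real system).
cat > /tmp/stub-grub.cfg <<'EOF'
search.fs_uuid d9f44ffb-3cb8-4783-8928-0123e5d8a149 root
set prefix=($root)'/@/boot/grub'
configfile $prefix/grub.cfg
EOF
awk '/^search\.fs_uuid/ { print $2 }' /tmp/stub-grub.cfg
# → d9f44ffb-3cb8-4783-8928-0123e5d8a149
```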
Using an md software RAID1 mirror for this FAT32 (ESP) partition is not
safe IF you use it outside of the Linux environment, because the mirror
will become corrupted the first time another OS (or the firmware) writes
to this partition.
It's better to set up a separate /boot partition (yes, here an md Linux
software RAID1 mirror is OK), which the GRUB2 bootloader can manage
correctly (be sure GRUB2 can access its modules to understand and manage
this LVM/RAID: mdraid09, mdraid1x, lvm.mod [1] [2]):
insmod raid
# load the related `mdraid' module: `mdraid09' for RAID arrays with
# version 0.9 metadata, `mdraid1x' for arrays with version 1.x metadata
insmod mdraid09
set root=(md0p1)
# or the following for an unpartitioned RAID array
set root=(md0)
IMHO installing from scratch (ex novo) is the easiest path: the installer
puts all the things in place correctly, building the right initramfs and
putting the correct entries in grub.cfg for the modules needed to manage
RAID/LVM ...
To be honest I don't know how the Anaconda installer manages the /dev/sda1
ESP/FAT32/EFI partition (I'd hope it clones this EFI partition to the 2nd
disk, but I think it will leave the /dev/sdb1 partition empty).
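If the installer does leave /dev/sdb1 empty, one possible workaround
(assuming /dev/sda1 and /dev/sdb1 are same-sized ESPs; the device names,
loader path, and label below are all examples, not taken from a real
install) is to copy the partition over and register a second UEFI boot
entry. The commands are only echoed here, since they overwrite /dev/sdb1:

```shell
# Hypothetical sketch: clone the ESP to the second disk and add a UEFI
# boot entry pointing at it. All names/paths below are examples.
run() { printf '%s\n' "$*"; }   # change to run() { "$@"; } on a real system

run dd if=/dev/sda1 of=/dev/sdb1 bs=1M conv=fsync
run efibootmgr --create --disk /dev/sdb --part 1 \
    --loader '\EFI\centos\shimx64.efi' --label 'CentOS (disk 2)'
```

Note this is a one-shot copy, not a mirror: the clone goes stale whenever
the first ESP changes, which is exactly the problem the md metadata-1.0
approach discussed earlier tries to solve.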
To understand better how GRUB2 works, I've looked here: [3] [4] [5]
Happy hacking
Fleur
[1] : https://unix.stackexchange.com/questions/187236/grub2-lvm2-raid1-boot
[2] : https://wiki.gentoo.org/wiki/GRUB/Advanced_storage
[3] : https://www.gnu.org/software/grub/manual/grub/grub.html
[4] : https://documentation.suse.com/sled/15-SP4/html/SLED-all/cha-grub2.html
[5] : https://wiki.archlinux.org/title/GRUB