On 1/10/23 20:20, Robert Moskowitz wrote:
> Official drives should be here Friday, so trying to get ready.
>
>
>
> On 1/9/23 01:32, Simon Matter wrote:
>> Hi
>>
>>> Continuing this thread, and focusing on RAID1.
>>>
>>> I got an HPE Proliant gen10+ that has hardware RAID support. (can
>>> turn it off if I want).
>> What exact model of RAID controller is this? If it's a S100i SR Gen10
>> then it's not hardware RAID at all.
>
> Yes, I found the information:
> ===========================
> HPE Smart Array Gen10 Controllers Data Sheet.
>
> Software RAID
>
> - HPE Smart Array S100i SR Gen10 Software RAID
>
> Notes:
>
> - HPE Smart Array S100i SR Gen10 SW RAID will operate in UEFI mode
> only. For legacy support an additional controller will be needed
>
> - The S100i only supports Windows. For Linux users, HPE offers a
> solution that uses in-distro open-source software to create a two-disk
> RAID 1 boot volume. For more information visit:
> https://downloads.linux.hpe.com/SDR/project/lsrrb/
> ===================
> I have yet to look at this url.
This guide seems to answer MOST of my questions.
>
>
>>> I am planning two groupings of RAID1 (it has 4 bays).
>>>
>>> There is also an internal USB boot port.
>>>
>>> So I am really a newbie in working with RAID. From this thread it
>>> sounds like I want /boot and /boot/efi on that USB boot device.
>> I suggest using the USB device only to boot the installation medium,
>> not using it for anything used by the OS.
>>
>>> Will it work to put / on the first RAID group? What happens if the 1st
>>> drive fails and it is replaced with a new blank drive? Will the config
>>> in /boot figure this out, or does the RAID hardware completely mask
>>> the 2 drives so that the system runs on the good one while the new
>>> one is being replicated?
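(If I end up on the software RAID route, my understanding of the failed-drive
case is that mdadm handles it rather than anything in /boot: the array keeps
running degraded on the good disk, and the replacement is partitioned and
added back by hand. A rough sketch only -- /dev/md2 as the RAID1 holding /
and /dev/sdb as the replacement disk are just placeholder names:

    # see which member is missing and the overall array state
    cat /proc/mdstat
    mdadm --detail /dev/md2

    # copy the partition layout from the surviving disk to the new one
    # (sfdisk shown here; sgdisk works similarly for GPT)
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # add the new member; md rebuilds in the background while the
    # system keeps running on the good disk
    mdadm --manage /dev/md2 --add /dev/sdb3
    watch cat /proc/mdstat

With a real hardware RAID controller the rebuild happens inside the
controller, so the OS only ever sees the one logical drive.)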
>
> I am trying to grok what you are saying here. Are MD0-4 the physical
> disks, or partitions?
I see from your response to another poster that you ARE talking about RAID
on individual partitions, so I can better think about your approach now.
Thanks.
>
> All the drives I am getting are 4TB, as that is the smallest
> enterprise-quality HD I could find! Quite overkill for me, at $75 each.
>
>> I guess the best thing would be to use Linux Software RAID and create a
>> small RAID1 (MD0) device for /boot and another one for /boot/efi (MD1),
>
> Here it sounds like MD0 and MD1 are partitions, not physical drives?
>
>> both at the beginning of disks 0 and 1. The remaining space on disks 0
>> and 1 is created as another MD device (MD2). Disks 2 and 3 are also
>> created as one RAID1 (MD3) device. Formatting can be done like this:
>>
>> MD0 has filesystem for /boot
>> MD1 has filesystem for /boot/efi
>> MD2 is used as LVM PV
>> MD3 is used as LVM PV
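If I follow that, the creation step would look roughly like this (my own
sketch only -- the partition numbers, and the assumption that the ESP and
/boot partitions sit first on disks 0 and 1, are guesses for illustration):

    # disks 0 and 1 (sda/sdb): ESP, /boot, and "rest of disk" partitions;
    # disks 2 and 3 (sdc/sdd): one big partition each
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 \
          /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

    mkfs.xfs /dev/md0              # /boot
    mkfs.vfat -F 32 /dev/md1       # /boot/efi (metadata 1.0, see the
                                   # discussion quoted further down)

From what I can tell, the installer's custom partitioning can build the
same layout, so none of this has to be typed by hand.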
>
> Now it really seems like the MDn are partitions, with MD0-2 on disks 1&2
> and MD3 on disks 3&4?
>
>> All other filesystems like / or /var or /home... will be created on LVM
>> Logical Volumes to give you full flexibility to manage storage.
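My reading of that, as a sketch (the volume group and LV names here are
just ones I made up for illustration):

    pvcreate /dev/md2 /dev/md3
    vgcreate vg_sys  /dev/md2      # the pair holding the OS (disks 1&2)
    vgcreate vg_data /dev/md3      # the second pair (disks 3&4)

    lvcreate -L 30G -n root vg_sys
    mkfs.xfs /dev/vg_sys/root      # becomes /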
>
> Given that I am using iRedMail, which puts the whole mail store under
> /var/vmail, /var goes on disks 3&4.
>
> /home will be little stuff. iRedMail components put their configs and
> data (like the domain and user SQL database) all over the place. Disks
> 1&2 will be basically empty. Wish I could have found high-quality 1TB
> drives for less...
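So with the (made-up) volume group names above, my layout would come out
something like this -- sizes are placeholders:

    # mail store: iRedMail keeps everything under /var/vmail, so /var
    # gets the whole second pair (disks 3&4)
    lvcreate -l 100%FREE -n var vg_data
    mkfs.xfs /dev/vg_data/var

    # small /home on the mostly empty first pair (disks 1&2)
    lvcreate -L 20G -n home vg_sys
    mkfs.xfs /dev/vg_sys/home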
>
> thanks
>
>>
>> Regards,
>> Simon
>>
>>> I also don't see how to build that boot USB stick. I will have the
>>> install ISO in the boot USB port and the 4 drives set up with hardware
>>> RAID. How are things figured out? I am missing some important piece
>>> here.
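(On the stick itself: if I understand it right, it is just a raw copy of
the install ISO onto the USB device; the filename and target device below
are placeholders:

    dd if=CentOS-Stream-9-latest-x86_64-dvd1.iso of=/dev/sdX \
       bs=4M status=progress oflag=direct

The installer then runs from the stick and lays out the four internal
drives however they are configured in partitioning.)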
>>>
>>> Oh, HP does list Red Hat support for this unit.
>>>
>>> thanks for all help.
>>>
>>> Bob
>>>
>>> On 1/6/23 11:45, Chris Adams wrote:
>>>> Once upon a time, Simon Matter <simon.matter at invoca.ch> said:
>>>>> Are you sure that's still true? I've done it that way in the past,
>>>>> but it seems at least with EL8 you can put /boot/efi on md raid1
>>>>> with metadata format 1.0. That way the EFI firmware will see it as
>>>>> two independent FAT filesystems. The only thing you have to be sure
>>>>> of is that nothing ever writes to these filesystems when Linux is
>>>>> not running, otherwise your /boot/efi md raid will become corrupt.
>>>>>
>>>>> Can someone who has this running confirm that it works?
>>>> Yes, that's even how RHEL/Fedora set it up currently, I believe. But
>>>> like you say, it only works as long as there's no other OS on the
>>>> system and the UEFI firmware itself is never used to change anything
>>>> on the FS. It's not entirely clear that most UEFI firmwares would
>>>> handle a drive failure correctly either (since it's outside the scope
>>>> of UEFI), so IIRC there's been some consideration in Fedora of
>>>> dropping this support.
>>>>
>>>> And... I'm not sure if GRUB2 handles RAID 1 /boot fully correctly for
>>>> things where it writes to the FS (grubenv updates for "savedefault",
>>>> for example). But there are other issues with GRUB2's FS handling
>>>> anyway, so this case is probably far down the list.
>>>>
>>>> I think that having RAID 1 for /boot and/or /boot/efi can be helpful
>>>> (and I've set it up, definitely not saying "don't do that"), but it
>>>> has to be handled with care and possibly (probably?) would need
>>>> manual intervention to get booting again after a drive failure or
>>>> replacement.
>>>>
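For reference, my reading of the metadata 1.0 arrangement being discussed
above is something like this (a sketch, not a tested recipe; the partition
names are placeholders):

    # metadata 1.0 puts the md superblock at the END of the partition,
    # so the firmware just sees two ordinary FAT filesystems
    mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 \
          /dev/sda1 /dev/sdb1
    mkfs.vfat -F 32 /dev/md1       # mounted as /boot/efi

    # after a disk swap, the new member is added back like any other
    # RAID1 member, but the firmware boot entries may still point only
    # at the old disk -- listing them is the first step of that fix-up
    mdadm --manage /dev/md1 --add /dev/sdb1
    efibootmgr -v

That last part is presumably the "manual intervention" Chris mentions: md
can resync the FAT copy, but nothing automatically tells the firmware
about the replacement disk.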
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> https://lists.centos.org/mailman/listinfo/centos