> I plan to upgrade an existing C7 computer which currently has one 256 GB
> SSD to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the rest
> of the system remaining the same. The system also has one additional
> internal and one external hard disk, but these should not be touched. The
> system will continue to run C7.
>
> If I remember correctly, the existing SSD does not use an M.2 slot, so
> the M.2 slots should be available for the new 4 TB SSDs while the old SSD
> remains in place. If I read the output from gparted correctly, this
> existing SSD is partitioned as follows:
>
> - EFI system partition, 260 MB, mounted as /boot/efi (formatted as FAT32).
>
> - boot partition, 1 GB, mounted as /boot (formatted as xfs).
>
> - /root, swap and /home seem to coexist on a LUKS-encrypted partition
> used as an LVM2 PV.
>
> My current plan is to:
>
> - Use Clonezilla to make individual backups of all partitions above.
>
> - Install the two M.2 SSDs.
>
> - Use an external disk tool to partition the two new M.2 SSDs as follows:
>
> -- Create a RAID1 partition for /boot/efi - same size as current, ie 260 MB
>
> -- Create a RAID1 partition for /boot - same size as current, ie 1 GB
>
> -- Create a RAID1 partition for LVM2 and LUKS - the rest of the 4 TB SSD
>
> Questions:
>
> - I do not see any benefit to breaking up the LVM2/LUKS partition
> containing /root, swap and /home into more than one RAID1 partition, or
> am I wrong? If the SSD fails, the entire SSD would fail and break the
> system, hence I might as well keep it as one single RAID1 partition, or?

What I usually do is this: "cut" the large disk into several pieces of
equal size and create individual RAID1 arrays. Then add them as LVM PVs to
one large VG. The advantage is that with one error on one disk, you won't
lose redundancy on the whole RAID mirror but only on a partial segment.
You can even lose another segment with an error on the other disk and
still have redundancy if the errors are in different parts. There is a
rough sketch of the commands at the end of this mail.

That said, it's a bit more work to set up, but it has helped me several
times over the decades.

> - Is the next step after the RAID1 partitioning above then to do a
> minimal install of C7, followed by using Clonezilla to restore the
> LVM2/LUKS partition?
>
> - Any advice on using Clonezilla? Or the external partitioning tool?
>
> - Finally, since these new SSDs are huge, perhaps I should take the
> opportunity to increase the space for both /root and swap?
>
> - /root is 50 GB - should I increase it to eg 100 GB?
>
> - The system currently has 32 GB of memory but I will likely upgrade it
> to 64 GB (or even 128 GB), perhaps I should at this time already increase
> the swap space to 64 GB/128 GB?

I'm also interested here to learn what others are doing in higher-memory
situations. I have some systems with half a TB of memory and never
configured more than 16 GB of swap. It has usually worked well, and when a
system started to use swap heavily, there was something really wrong in an
application and it had to be fixed there. Additionally, we've tuned the
kernel VM settings so that the kernel didn't want to swap too much,
because swapping was always slow anyway, even on fast U.2 NVMe SSD
storage.
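To illustrate the segmented layout: with the two 4 TB disks split into,
say, four equal partitions each, the setup looks roughly like this. This
is only a sketch - the device names (/dev/nvme0n1, /dev/nvme1n1) and the
array/VG names are made up, and LUKS is left out for brevity; adapt
everything to your actual hardware:

  # One RAID1 array per partition pair (four ~1 TB segments)
  mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
  mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
  mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3
  mdadm --create /dev/md13 --level=1 --raid-devices=2 /dev/nvme0n1p4 /dev/nvme1n1p4

  # All four mirrors become PVs in one large VG
  pvcreate /dev/md10 /dev/md11 /dev/md12 /dev/md13
  vgcreate vg_data /dev/md10 /dev/md11 /dev/md12 /dev/md13

  # Then carve out LVs as usual, for example
  lvcreate -L 100G -n root vg_data
  lvcreate -L 32G -n swap vg_data
  lvcreate -l 100%FREE -n home vg_data

A bad area on one disk then only degrades the one array it sits in; the
other segments keep their full mirror.

As for the VM tuning, the main knob we touched was vm.swappiness, so that
the kernel prefers dropping caches over swapping. The exact value is a
matter of taste; something like:

  # Default is 60; lower means less eager to swap
  echo "vm.swappiness = 10" > /etc/sysctl.d/99-swappiness.conf
  sysctl -p /etc/sysctl.d/99-swappiness.conf

Regards,
Simon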
On 1/11/23 02:09, Simon Matter wrote:
>> I plan to upgrade an existing C7 computer which currently has one 256 GB
>> SSD to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the
>> rest of the system remaining the same. The system also has one
>> additional internal and one external hard disk, but these should not be
>> touched. The system will continue to run C7.

[... trimming ...]

>> - I do not see any benefit to breaking up the LVM2/LUKS partition
>> containing /root, swap and /home into more than one RAID1 partition, or
>> am I wrong? If the SSD fails, the entire SSD would fail and break the
>> system, hence I might as well keep it as one single RAID1 partition, or?
>
> What I usually do is this: "cut" the large disk into several pieces of
> equal size and create individual RAID1 arrays. Then add them as LVM PVs
> to one large VG. The advantage is that with one error on one disk, you
> won't lose redundancy on the whole RAID mirror but only on a partial
> segment. You can even lose another segment with an error on the other
> disk and still have redundancy if the errors are in different parts.
>
> That said, it's a bit more work to set up, but it has helped me several
> times over the decades.

Ah, now I begin to get it. Separate partitions, RAIDed.

>> - Is the next step after the RAID1 partitioning above then to do a
>> minimal install of C7, followed by using Clonezilla to restore the
>> LVM2/LUKS partition?
>>
>> - Any advice on using Clonezilla? Or the external partitioning tool?
>>
>> - Finally, since these new SSDs are huge, perhaps I should take the
>> opportunity to increase the space for both /root and swap?
>>
>> - /root is 50 GB - should I increase it to eg 100 GB?
>>
>> - The system currently has 32 GB of memory but I will likely upgrade it
>> to 64 GB (or even 128 GB), perhaps I should at this time already
>> increase the swap space to 64 GB/128 GB?
>
> I'm also interested here to learn what others are doing in higher-memory
> situations. I have some systems with half a TB of memory and never
> configured more than 16 GB of swap. It has usually worked well, and when
> a system started to use swap heavily, there was something really wrong
> in an application and it had to be fixed there. Additionally, we've tuned
> the kernel VM settings so that the kernel didn't want to swap too much,
> because swapping was always slow anyway, even on fast U.2 NVMe SSD
> storage.

Perhaps you have not dealt with Firefox? :) On my Fedora 35 notebook it
slowly gobbles memory, and I have to quit it after some number of days and
restart it. Now I only have 16 GB of memory, 16 GB physical swap, and 8 GB
zram swap. Building an F37 system now to see how that works; I doubt there
is any improved behavior with Firefox.
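For anyone wanting to check their own swap layout, the standard tools show
it directly (nothing here is specific to my machine):

  # Active swap devices/files with size, usage and priority
  swapon --show    # on older util-linux: swapon -s, or cat /proc/swaps

  # zram devices with compression statistics, if any (util-linux >= 2.26)
  zramctl

  # Overall memory and swap totals
  free -h

Fedora's zram swap is normally set up with a higher priority than disk
swap, so it fills up first and the partition only gets used as overflow.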
On 01/11/2023 02:09 AM, Simon Matter wrote:
> What I usually do is this: "cut" the large disk into several pieces of
> equal size and create individual RAID1 arrays. Then add them as LVM PVs
> to one large VG. The advantage is that with one error on one disk, you
> won't lose redundancy on the whole RAID mirror but only on a partial
> segment. You can even lose another segment with an error on the other
> disk and still have redundancy if the errors are in different parts.
>
> That said, it's a bit more work to set up, but it has helped me several
> times over the decades.

But is your strategy of dividing the large disk into individual RAID1
arrays also applicable to SSDs? I have heard, perhaps incorrectly, that
once an SSD fails, the entire SSD becomes unusable, which would suggest
that dividing it into multiple RAID1 arrays would not be useful.
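Put differently: the split only pays off for failure modes that affect
part of the disk (e.g. bad blocks), where the mirrors would degrade one
segment at a time. That is at least easy to observe per array; a quick
check would be something like this (array names assumed from the sketch
earlier in the thread):

  # Overall state of all md arrays; a degraded mirror shows [U_]
  cat /proc/mdstat

  # Detail for a single segment
  mdadm --detail /dev/md12

  # NVMe/SMART health of the underlying disks
  smartctl -a /dev/nvme0n1
  smartctl -a /dev/nvme1n1

If the whole SSD drops off the bus instead, all of its segments degrade
at once and the split buys nothing - which is exactly the question.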