H
2020-Oct-22 23:07 UTC
[CentOS] ThinkStation with BIOS RAID and disk error messages in gparted
My ThinkStation runs CentOS 7, which I installed on a BIOS RAID 0 setup with two identical 256 GB SSDs after removing Windows. It runs fine, but I just discovered something in gparted that does not seem right:

- On launch, gparted complains "invalid argument during seek for read on /dev/md126". When I click Ignore I get another error: "The backup GPT table is corrupt, but the primary appears OK, so that will be used." I click OK and see the same error a second time. I then see "Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 6832 blocks) or continue with the current setting?" I click Fix, but nothing seems to happen.

I am not sure what /dev/md126 is, but it is exactly the same size as sda and sdb, which I believe are the two RAID disks. I also have two other hard disks which seem to be fine, one using XFS, the other ZFS.

Does this look familiar to anyone? Given the error messages, it seems this is something I ought to fix sooner rather than later. Any idea what I should do? Thanks.
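If it helps diagnosis, I can run the following to gather more detail. This is just a sketch assuming a stock CentOS 7 install where mdadm is available and gdisk has been installed; the device names match what I see on my system.

    # List all md devices the kernel has assembled, with their member disks
    cat /proc/mdstat

    # Read-only check of the GPT structures on one member disk; gdisk
    # prints warnings about a damaged or misplaced backup table
    gdisk -l /dev/sdb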
Simon Matter
2020-Oct-23 07:29 UTC
[CentOS] ThinkStation with BIOS RAID and disk error messages in gparted
> My ThinkStation runs CentOS 7, which I installed on a BIOS RAID 0 setup
> with two identical 256 GB SSDs after removing Windows. It runs fine, but
> I just discovered something in gparted that does not seem right:
>
> - On launch, gparted complains "invalid argument during seek for read on
> /dev/md126". When I click Ignore I get another error: "The backup GPT
> table is corrupt, but the primary appears OK, so that will be used." I
> click OK and see the same error a second time. I then see "Not all of
> the space available to /dev/sdb appears to be used, you can fix the GPT
> to use all of the space (an extra 6832 blocks) or continue with the
> current setting?" I click Fix, but nothing seems to happen.
>
> I am not sure what /dev/md126 is, but it is exactly the same size as sda
> and sdb, which I believe are the two RAID disks. I also have two other
> hard disks which seem to be fine, one using XFS, the other ZFS.
>
> Does this look familiar to anyone? Given the error messages, it seems
> this is something I ought to fix sooner rather than later. Any idea what
> I should do?

I'm a bit confused about what you have here. Did you mix pseudo hardware RAID (BIOS RAID 0) with software RAID? Because /dev/md126 is clearly part of a software RAID.

Regards,
Simon
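To check, you could examine the raw RAID metadata on one of the member disks. A minimal sketch, assuming the two SSDs are sda and sdb:

    # Print the RAID superblock found on the raw disk. The metadata line
    # shows whether this is native md RAID (version 0.90/1.x) or an Intel
    # firmware RAID (imsm) member
    mdadm --examine /dev/sda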
Chris Adams
2020-Oct-23 14:07 UTC
[CentOS] ThinkStation with BIOS RAID and disk error messages in gparted
Once upon a time, Simon Matter <simon.matter at invoca.ch> said:
> I'm a bit confused about what you have here. Did you mix pseudo hardware
> RAID (BIOS RAID 0) with software RAID? Because /dev/md126 is clearly
> part of a software RAID.

IIRC the old dmraid support for motherboard RAID has been phased out, but mdraid has grown support for Intel (and maybe some other?) common motherboard RAID formats. So /dev/md<foo> hasn't inherently meant "Linux software RAID" for a while now.
--
Chris Adams <linux at cmadams.net>
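If it is the Intel (IMSM) flavor, mdadm can usually confirm that directly. Roughly like this, an untested sketch on my part:

    # Show what firmware RAID platforms mdadm detects on this machine
    mdadm --detail-platform

    # List assembled arrays; an IMSM setup typically shows a container
    # (metadata=imsm) plus the actual RAID volume built inside it
    mdadm --detail --scan

On such setups the partitioning tools generally want to be pointed at the assembled volume (here /dev/md126), not at the raw member disks, which is one possible source of the confusing GPT messages.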
H
2020-Oct-23 15:19 UTC
[CentOS] ThinkStation with BIOS RAID and disk error messages in gparted
On 10/23/2020 03:29 AM, Simon Matter wrote:
>> My ThinkStation runs CentOS 7, which I installed on a BIOS RAID 0 setup
>> with two identical 256 GB SSDs after removing Windows. [...]
>
> I'm a bit confused about what you have here. Did you mix pseudo hardware
> RAID (BIOS RAID 0) with software RAID? Because /dev/md126 is clearly
> part of a software RAID.
>
> Regards,
> Simon

Not that I know of, but how do I check my configuration?
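For reference, these are the commands I plan to try, a sketch based on what I have found so far, assuming the default CentOS 7 tools:

    # Tree view of disks, partitions and md devices with sizes and
    # filesystems, to see how sda, sdb and md126 relate
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

    # Full detail for the mystery device, including its metadata format
    # and (for firmware RAID) the container it belongs to
    mdadm --detail /dev/md126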