I am trying to upgrade my system from 500GB drives to 1TB.  I was able to
partition and sync the raid devices, but I cannot get the new drive to boot.

This is an old system with only IDE ports.  There is an added Highpoint raid
card which is used only for the two extra IDE ports.  I have upgraded it with
a 1TB SATA drive and an IDE-SATA adapter.  I did not have any problems with
the system recognizing the drive or adding it to the mdraid.  A short SMART
test shows no errors.

Partitions:
Disk /dev/hdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdg1               1          25      200781   fd  Linux raid autodetect
/dev/hdg2              26      121537   976045140   fd  Linux raid autodetect
/dev/hdg3          121538      121601      514080   fd  Linux raid autodetect

Raid:
Personalities : [raid1]
md0 : active raid1 hdg1[1] hde1[0]
      200704 blocks [2/2] [UU]

md1 : active raid1 hdg3[1] hde3[0]
      513984 blocks [2/2] [UU]

md2 : active raid1 hdg2[1] hde2[0]
      487644928 blocks [2/2] [UU]

fstab (unrelated lines removed):
/dev/md2    /        ext3    defaults    1 1
/dev/md0    /boot    ext3    defaults    1 2
/dev/md1    swap     swap    defaults    0 0

I installed grub on the new drive:
grub> device (hd0) /dev/hdg

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2
/grub/grub.conf"... succeeded
Done.

But when I attempt to boot from the drive (with or without the other drive
connected and in either IDE connector on the Highpoint card), it fails.
Grub attempts to boot, but the last thing I see after the BIOS is the line
"GRUB Loading stage 1.5", then the screen goes black, the system speaker
beeps, and the machine reboots.  This will continue as long as I let it.
As soon as I switch the boot drive back to the original hard drive, it
boots up normally.

I also tried installing grub as (hd1) with the same results.

A few Google searches haven't turned up any hits with this particular
problem, and all of the similar problems have been with Ubuntu and grub2.

Any suggestions?

Thanks,

-- 
Bowie
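For reference, the add-and-resync step and the short SMART test described
above generally look something like this on a CentOS-era system; only a
sketch, using the device names from the post:

    # add the freshly partitioned disk back into each mirror
    mdadm /dev/md0 --add /dev/hdg1
    mdadm /dev/md2 --add /dev/hdg2
    mdadm /dev/md1 --add /dev/hdg3

    # watch the resync progress
    cat /proc/mdstat

    # run a short SMART self-test and read back the result
    smartctl -t short /dev/hdg
    smartctl -l selftest /dev/hdg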
Bowie Bailey wrote:
> I am trying to upgrade my system from 500GB drives to 1TB.  I was able
> to partition and sync the raid devices, but I cannot get the new drive
> to boot.
>
> This is an old system with only IDE ports.  There is an added Highpoint
> raid card which is used only for the two extra IDE ports.  I have
> upgraded it with a 1TB SATA drive and an IDE-SATA adapter.  I did not
> have any problems with the system recognizing the drive or adding it to
> the mdraid.  A short SMART test shows no errors.
<snip>
Trying to get your configuration clear in my mind - the drives are 1TB
IDE, and they're attached to the m/b, or to the Hpt RAID card?

Also, did you update the system?  New kernel?  If so, is the RAID card
recognized?  (We've got a Hpt RocketRaid card in a CentOS 6 system, and
we're *finally* replacing it with an LSI once it comes in, because Hpt
does not care about old cards; I had to find the source code, hack it to
compile for the new kernel, and have had to recompile for each new kernel
we've installed....)

      mark
m.roth at 5-cent.us wrote:
> Bowie Bailey wrote:
>> I am trying to upgrade my system from 500GB drives to 1TB.  I was able
>> to partition and sync the raid devices, but I cannot get the new drive
>> to boot.
>>
>> This is an old system with only IDE ports.  There is an added Highpoint
>> raid card which is used only for the two extra IDE ports.  I have
>> upgraded it with a 1TB SATA drive and an IDE-SATA adapter.  I did not
>> have any problems with the system recognizing the drive or adding it to
>> the mdraid.  A short SMART test shows no errors.
> <snip>
> Trying to get your configuration clear in my mind - the drives are 1TB
> IDE, and they're attached to the m/b, or to the Hpt RAID card?
>
> Also, did you update the system? New kernel? If so, is the RAID card
> recognized (we've got a Hpt RocketRaid card in a CentOS 6 system, and
> we're *finally* replacing it with an LSI (once it comes in), because Hpt
> does not care about old cards, and I had to find the source code, and then
> hack it to compile it for the new kernel, and have had to recompile for
> the new kernels we've installed....
>
To follow myself up, I forgot one thing I'd intended to ask: is it
possible that you needed to rebuild the initrd?

      mark
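Rebuilding the initrd on a CentOS 5-era system with GRUB legacy is quick to
try; this is only a sketch, assuming the stock mkinitrd and the currently
running kernel, with the backup filename as an illustration:

    # keep a copy of the current initrd in case the new one fails to boot
    cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak

    # rebuild for the running kernel; -f overwrites, -v lists the modules
    # (raid1 should get pulled in for the md root device)
    mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)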
On 8/5/2015 11:27 AM, m.roth at 5-cent.us wrote:
> Bowie Bailey wrote:
>> I am trying to upgrade my system from 500GB drives to 1TB.  I was able
>> to partition and sync the raid devices, but I cannot get the new drive
>> to boot.
>>
>> This is an old system with only IDE ports.  There is an added Highpoint
>> raid card which is used only for the two extra IDE ports.  I have
>> upgraded it with a 1TB SATA drive and an IDE-SATA adapter.  I did not
>> have any problems with the system recognizing the drive or adding it to
>> the mdraid.  A short SMART test shows no errors.
> <snip>
> Trying to get your configuration clear in my mind - the drives are 1TB
> IDE, and they're attached to the m/b, or to the Hpt RAID card?
>
> Also, did you update the system? New kernel? If so, is the RAID card
> recognized (we've got a Hpt RocketRaid card in a CentOS 6 system, and
> we're *finally* replacing it with an LSI (once it comes in), because Hpt
> does not care about old cards, and I had to find the source code, and then
> hack it to compile it for the new kernel, and have had to recompile for
> the new kernels we've installed....

It was originally a pair of 500GB IDE drives in an mdraid mirror
configuration.  Right now, I have removed one 500GB drive and replaced it
with a 1TB SATA drive with an IDE-SATA adapter.  Both drives are connected
to the Highpoint card and apparently working fine other than the boot-up
problem.

I was considering adding a SATA card to the system, but I didn't want to
deal with finding drivers for a card old enough to work with this system
(32-bit PCI).

I have not done any updates to the system in quite some time.

-- 
Bowie
On Wed, Aug 5, 2015 at 9:12 AM, Bowie Bailey <Bowie_Bailey at buc.com> wrote:
> I am trying to upgrade my system from 500GB drives to 1TB.

I'm going to guess that there are no IDE drives that have 4096 byte
physical sectors, but it's worth confirming you don't have such a drive,
because the current partition scheme you've posted would be sub-optimal if
it does have 4096 byte sectors.

> I was able to partition and sync the raid devices, but I cannot get the
> new drive to boot.
>
> This is an old system with only IDE ports.  There is an added Highpoint
> raid card which is used only for the two extra IDE ports.  I have
> upgraded it with a 1TB SATA drive and an IDE-SATA adapter.  I did not
> have any problems with the system recognizing the drive or adding it to
> the mdraid.  A short SMART test shows no errors.
>
> Partitions:
> Disk /dev/hdg: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/hdg1               1          25      200781   fd  Linux raid autodetect
> /dev/hdg2              26      121537   976045140   fd  Linux raid autodetect
> /dev/hdg3          121538      121601      514080   fd  Linux raid autodetect

In the realm of totally esoteric and not likely the problem: 0xfd is for
mdadm metadata v0.9, which uses kernel autodetect.  If the mdadm metadata
is 1.x then the type code ought to be 0xda, but this is so obscure that
parted doesn't even support it; fdisk does, but I don't know when support
was added.  That arrangement uses initrd autodetect rather than the
deprecated kernel autodetect.  It's fine to use 0.9 even though it's
deprecated.

You can use mdadm -E on each member device (each partition) to find out
what metadata version is being used.

Normally GRUB stage 1.5 is not needed; stage 1 can jump directly to stage 2
if it's in the MBR gap.  But your partition scheme doesn't have an MBR gap
- you've started the first partition at LBA 1.  So that means it'll have to
use block lists...

> I installed grub on the new drive:
> grub> device (hd0) /dev/hdg
>
> grub> root (hd0,0)
>  Filesystem type is ext2fs, partition type 0xfd
>
> grub> setup (hd0)
>  Checking if "/boot/grub/stage1" exists... no
>  Checking if "/grub/stage1" exists... yes
>  Checking if "/grub/stage2" exists... yes
>  Checking if "/grub/e2fs_stage1_5" exists... yes
>  Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
> succeeded
>  Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2
> /grub/grub.conf"... succeeded
> Done.

I'm confused.  I don't know why this succeeds, because the setup was
pointed to hd0, which means the entire disk, not a partition, and yet the
disk doesn't have an MBR gap.  So there's no room for GRUB stage 2.

> But when I attempt to boot from the drive (with or without the other
> drive connected and in either IDE connector on the Highpoint card), it
> fails.  Grub attempts to boot, but the last thing I see after the bios is
> the line "GRUB Loading stage 1.5", then the screen goes black, the system
> speaker beeps, and the machine reboots.  This will continue as long as I
> let it.  As soon as I switch the boot drive back to the original hard
> drive, it boots up normally.

Yeah, it says it's succeeding but it really isn't, I think.  The problem is
not the initrd yet, because that could be totally busted or missing and you
should still get a GRUB menu.  This is all a failure of getting to stage 2,
which then can read the file system and load the rest of its modules.

> I also tried installing grub as (hd1) with the same results.

I'm disinclined to believe that hd0 or hd1 translates into hdg, but I
forget how to list devices in GRUB legacy.  I'm going to bet, though, that
device.map is stale and probably needs to be recreated, and then find out
what the proper hdX is for hdg.  And then I think you're going to need to
point it at a partition using hdX,Y.

-- 
Chris Murphy
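For what it's worth, both of those checks are quick; a minimal sketch,
assuming GRUB legacy (0.97) and the device names already shown in this
thread:

    # metadata version of one raid member (0.90 vs 1.x)
    mdadm -E /dev/hdg1 | grep -i version

    # inside the grub shell: list every (hdX,Y) whose filesystem contains
    # stage1, which shows how GRUB's drive numbering maps to the real disks
    grub> find /grub/stage1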
On Wed, Aug 5, 2015 at 10:34 AM, Chris Murphy <lists at colorremedies.com> wrote:
> On Wed, Aug 5, 2015 at 9:12 AM, Bowie Bailey <Bowie_Bailey at buc.com> wrote:
>> I am trying to upgrade my system from 500GB drives to 1TB.
>
> I'm going to guess that there are no IDE drives that have 4096 byte
> physical sectors, but it's worth confirming you don't have such a
> drive because the current partition scheme you've posted would be
> sub-optimal if it does have 4096 byte sectors.

Oops.  I just reread that this is now SATA.  New versions of hdparm and
smartctl can tell you if the drive is Advanced Format, and if it is, then I
recommend redoing the partition scheme so it's 4K aligned, and so that it
has an MBR gap.  The current way to do both is to have the 1st partition
start at LBA 2048.

-- 
Chris Murphy
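Checking whether the drive reports 4096-byte physical sectors is also
quick; a sketch, assuming a reasonably recent hdparm/smartctl and using
/dev/hdg as the drive appears in this thread (an IDE-SATA adapter may not
pass the full identify data through, so treat the output with some
suspicion):

    # look for "Physical Sector size: 4096 bytes" in the identify data
    hdparm -I /dev/hdg | grep -i 'sector size'

    # or look for "Sector Sizes: 512 bytes logical, 4096 bytes physical"
    smartctl -i /dev/hdg

    # if it is 4K, repartition in sector units so partition 1 starts at 2048
    fdisk -u /dev/hdg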
On 8/5/2015 12:34 PM, Chris Murphy wrote:
> On Wed, Aug 5, 2015 at 9:12 AM, Bowie Bailey <Bowie_Bailey at buc.com> wrote:
>> I am trying to upgrade my system from 500GB drives to 1TB.
>
> I'm going to guess that there are no IDE drives that have 4096 byte
> physical sectors, but it's worth confirming you don't have such a
> drive because the current partition scheme you've posted would be
> sub-optimal if it does have 4096 byte sectors.

The partition table was originally created by the installer.

>> I was able to partition and sync the raid devices, but I cannot get the
>> new drive to boot.
>>
>> This is an old system with only IDE ports.  There is an added Highpoint
>> raid card which is used only for the two extra IDE ports.  I have
>> upgraded it with a 1TB SATA drive and an IDE-SATA adapter.  I did not
>> have any problems with the system recognizing the drive or adding it to
>> the mdraid.  A short SMART test shows no errors.
>>
>> Partitions:
>> Disk /dev/hdg: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>    Device Boot      Start         End      Blocks   Id  System
>> /dev/hdg1               1          25      200781   fd  Linux raid autodetect
>> /dev/hdg2              26      121537   976045140   fd  Linux raid autodetect
>> /dev/hdg3          121538      121601      514080   fd  Linux raid autodetect
>
> In the realm of totally esoteric and not likely the problem, 0xfd is
> for mdadm metadata v0.9 which uses kernel autodetect.  If the mdadm
> metadata is 1.x then the type code ought to be 0xda but this is so
> obscure that parted doesn't even support it.  fdisk does but I don't
> know when support was added.  This uses initrd autodetect rather than
> the deprecated kernel autodetect.  It's fine to use 0.9 even though
> it's deprecated.
>
> You can use mdadm -E on each member device (each partition) to find
> out what metadata version is being used.

    Version : 0.90.00

> Normally GRUB stage 1.5 is not needed, stage 1 can jump directly to
> stage 2 if it's in the MBR gap.  But your partition scheme doesn't have
> an MBR gap, you've started the first partition at LBA 1.  So that means
> it'll have to use block lists...
>
>> I installed grub on the new drive:
>> grub> device (hd0) /dev/hdg
>>
>> grub> root (hd0,0)
>>  Filesystem type is ext2fs, partition type 0xfd
>>
>> grub> setup (hd0)
>>  Checking if "/boot/grub/stage1" exists... no
>>  Checking if "/grub/stage1" exists... yes
>>  Checking if "/grub/stage2" exists... yes
>>  Checking if "/grub/e2fs_stage1_5" exists... yes
>>  Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
>> succeeded
>>  Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2
>> /grub/grub.conf"... succeeded
>> Done.
>
> I'm confused.  I don't know why this succeeds because the setup was
> pointed to hd0, which means the entire disk, not a partition, and yet
> the disk doesn't have an MBR gap.  So there's no room for GRUB stage 2.

I'm not sure.  It's been so long that I don't remember what I did (if
anything) to get grub working on the second drive of the set.  The first
drive was configured by the installer.  What I'm doing now is what I found
to work for my backup system, which gets a new drive in the raid set every
month.

>> But when I attempt to boot from the drive (with or without the other
>> drive connected and in either IDE connector on the Highpoint card), it
>> fails.  Grub attempts to boot, but the last thing I see after the bios
>> is the line "GRUB Loading stage 1.5", then the screen goes black, the
>> system speaker beeps, and the machine reboots.  This will continue as
>> long as I let it.  As soon as I switch the boot drive back to the
>> original hard drive, it boots up normally.
>
> Yeah it says it's succeeding but it really isn't, I think.  The problem
> is not the initrd yet, because that could be totally busted or missing,
> and you should still get a GRUB menu.  This is all a failure of getting
> to stage 2, which then can read the file system and load the rest of
> its modules.
>
>> I also tried installing grub as (hd1) with the same results.
>
> I'm disinclined to believe that hd0 or hd1 translate into hdg, but I
> forget how to list devices in GRUB legacy.  I'm going to bet though
> that device.map is stale and it probably needs to be recreated, and
> then find out what the proper hdX is for hdg.  And then I think you're
> going to need to point it at a partition using hdX,Y.

I'm willing to give that a try.  The device.map looks good to me:

(hd0)   /dev/hde
(hd1)   /dev/hdg

It is old, but the drives are still connected to the same connectors, so it
should still be valid.

How would I go about pointing it at the partition?  What I am currently
doing is this:

device (hd0) /dev/hdg
root (hd0,0)
setup (hd0)

Would I just need to change the setup line to "setup (hd0,0)", or is there
more to it than that?

Also, the partitions are mirrored, so if I install to a partition, I will
affect the working drive as well.  I'm not sure I want to risk breaking the
setup that still works.  I can take this machine down for testing pretty
much whenever I need to, but I can't leave it down for an extended period
of time.

-- 
Bowie
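For reference, the partition-boot-sector variant of that GRUB legacy
sequence would look like the sketch below, using the device names from this
thread; this is not a recommendation.  setup (hd0,0) writes stage1 into the
first sector of hdg1 instead of the MBR, so booting that way also depends
on generic MBR boot code and the partition being flagged active:

    grub> device (hd0) /dev/hdg
    grub> root (hd0,0)
    grub> setup (hd0,0)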
On 08/05/2015 08:12 AM, Bowie Bailey wrote:
>
> This is an old system with only IDE ports.  There is an added
> Highpoint raid card which is used only for the two extra IDE ports.

Why "extra"?  Are there drives connected to this system other than the two
you're discussing for the software RAID sets?

I know you said that you can't take the system down for an extended period
of time.  Do you have enough time to connect the two 1TB drives and nothing
else, and do a new install?  It would be useful to know if such an install
booted, to exclude the possibility that there's some fundamental
incompatibility between some combination of the BIOS, the Highpoint boot
ROM, and the 1TB drives.

If it doesn't boot, you have the option of putting the bootloader, kernel,
and initrd on some other media.  You could boot from an optical disc, or a
USB drive, or CF.
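A rough sketch of that last option with GRUB legacy, assuming the
alternate media shows up as /dev/sdX (a placeholder) and that grub.conf on
it is then edited so the kernel/initrd lines still point at the md root:

    mkfs.ext3 /dev/sdX1
    mkdir -p /mnt/usbboot
    mount /dev/sdX1 /mnt/usbboot
    mkdir /mnt/usbboot/boot
    cp -a /boot/* /mnt/usbboot/boot/

    # install stage1/stage2 onto the stick's MBR and its /boot/grub
    grub-install --root-directory=/mnt/usbboot /dev/sdX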
On 8/6/2015 3:56 PM, Gordon Messmer wrote:
> On 08/05/2015 08:12 AM, Bowie Bailey wrote:
>>
>> This is an old system with only IDE ports.  There is an added
>> Highpoint raid card which is used only for the two extra IDE ports.
>
> Why "extra"?  Are there drives connected to this system other than the
> two you're discussing for the software RAID sets?
>
> I know you said that you can't take the system down for an extended
> period of time.  Do you have enough time to connect the two 1TB drives
> and nothing else, and do a new install?  It would be useful to know if
> such an install booted, to exclude the possibility that there's some
> fundamental incompatibility between some combination of the BIOS, the
> Highpoint boot ROM, and the 1TB drives.
>
> If it doesn't boot, you have the option of putting the bootloader,
> kernel, and initrd on some other media.  You could boot from an
> optical disc, or a USB drive, or CF.

To be honest, I don't remember why the Highpoint card was used.  It could
be that I had originally intended to use the raid capabilities of the card,
or maybe I just didn't want the two members of the mirror to be
master/slave on the same IDE channel.

Doing a new install on the two 1TB drives is my current plan.  If that
works, I can connect the old drive, copy over all the data, and then try to
figure out what I need to do to get all the programs running again.

-- 
Bowie
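When it gets to the copy step, one way to pull data off the old disk
without letting it resync into the new arrays is to assemble it under a
different md device, read-only; a sketch with placeholder names (md10,
hde2, /home):

    # start the old root partition as its own degraded, read-only array
    mdadm --assemble --run --readonly /dev/md10 /dev/hde2

    mkdir -p /mnt/olddisk
    mount -o ro /dev/md10 /mnt/olddisk

    # then copy whatever is needed, e.g.
    rsync -aHx /mnt/olddisk/home/ /home/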