Hello All,

I have a machine that crashed. Some part of the motherboard (power
supply-related) went south.

The motherboard, CPU and memory have been replaced with a much newer
architecture. The OS and data are intact on two SATA drives that were
RAID1 with LVM.

I am going to use 'linux rescue' to recover the LVM backup so I can
mount the RAIDs (there were two) in a new CentOS install, on a third disk.

I have no indication that I could recover the previous CentOS
(somewhere between CentOS 5.1 and 5.2 on updates).

Can I use 'linux rescue' to fix that OS up to boot it? The kernel
panics in its current state (because the hardware architecture is so
strikingly different). What is the methodology of fixing the kernel
in this circumstance?

Thanks in Advance!
Glenn
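For reference, 'linux rescue' from the CentOS media will usually offer to find
the old installation and mount it under /mnt/sysimage on its own. If it does
not, the manual steps for getting at a RAID1 + LVM setup look roughly like the
sketch below; the array, volume group and logical volume names are assumptions
and will differ on the real system:

boot: linux rescue                          # at the install media boot prompt
# mdadm --assemble --scan                   # assemble the RAID1 arrays from their superblocks
# lvm vgscan                                # look for LVM volume groups on the arrays
# lvm vgchange -ay                          # activate the logical volumes
# lvm lvs                                   # list what was found
# mkdir /mnt/old
# mount /dev/VolGroup00/LogVol00 /mnt/old   # mount one LV to get at the data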
Bill Campbell
2009-Jan-19 22:18 UTC
[CentOS] Help with a good recovery plan.. Linux rescue?
On Mon, Jan 19, 2009, Glenn wrote:
> Hello All,
>
> I have a machine that crashed. Some part of the motherboard (power
> supply-related) went south.
>
> The motherboard, CPU and memory have been replaced with a much newer
> architecture. The OS and data are intact on two SATA drives that were
> RAID1 with LVM.
>
> I am going to use 'linux rescue' to recover the LVM backup so I can
> mount the RAIDs (there were two) in a new CentOS install, on a third disk.
>
> I have no indication that I could recover the previous CentOS
> (somewhere between CentOS 5.1 and 5.2 on updates).
>
> Can I use 'linux rescue' to fix that OS up to boot it? The kernel
> panics in its current state (because the hardware architecture is so
> strikingly different). What is the methodology of fixing the kernel
> in this circumstance?

I don't know that this will be a major problem, if my experience years ago
going from Caldera OpenLinux something-or-other to SuSE Pro was any example.
The old Caldera system had a multi-disk LVM RAID which I had moved to a newer
system, fully expecting to lose the data after installing a new Linux OS on
the system. I did a fresh install on the primary HD without touching the
external RAID drives, and, much to my surprise, the new Linux found the RAID
drives, asking if I wanted to mount them.

Bill
--
INTERNET: bill at celestial.com    Bill Campbell; Celestial Software LLC
URL: http://www.celestial.com/    PO Box 820; 6641 E. Mercer Way
Voice: (206) 236-1676             Mercer Island, WA 98040-0820
Fax:   (206) 232-9186

My reading of history convinces me that most bad government results from
too much government. -- Thomas Jefferson
Joseph L. Casale
2009-Jan-20 05:40 UTC
[CentOS] Help with a good recovery plan.. Linux rescue?
> Can I use 'linux rescue' to fix that OS up to boot it? The kernel
> panics in its current state (because the hardware architecture is so
> strikingly different). What is the methodology of fixing the kernel
> in this circumstance?

You likely don't have the block device modules for whatever controller this
new rig has in your initrd. Work out what that controller is from your rescue
environment (look for something obvious); here are some examples (IDE|SATA|SCSI):

# lspci
00:1f.2 IDE interface: Intel Corporation 82801H (ICH8 Family) 4 port SATA IDE Controller (rev 02)
00:1f.5 IDE interface: Intel Corporation 82801H (ICH8 Family) 2 port SATA IDE Controller (rev 02)
03:00.0 SATA controller: JMicron Technologies, Inc. JMicron 20360/20363 AHCI Controller (rev 03)
03:00.1 IDE interface: JMicron Technologies, Inc. JMicron 20360/20363 AHCI Controller (rev 03)

Then figure out which module supports it, and perhaps where it lives in the
modules/source tree:

# lsmod
ata_piix   22341  4
libata    143997  2 ahci,ata_piix
scsi_mod  134605  3 sg,libata,sd_mod

# locate scsi_mod
/lib/modules/2.6.18-92.1.18.el5/kernel/drivers/scsi/scsi_mod.ko
/usr/src/kernels/linux-2.6.28-rc6/drivers/scsi/scsi_module.c

Obviously scsi_mod is a SCSI (block device) module, and I am using it...

Then either add the needed modules to your modprobe.conf, which is read by
mkinitrd, or pass them as command-line options to mkinitrd:

# cat /etc/modprobe.conf
alias eth0 e100
alias eth1 r8169
alias scsi_hostadapter ata_piix
alias scsi_hostadapter1 ahci

Generate the new initrd (see man mkinitrd), forcing the kernel version you
currently have *on the dead system* -- look in /boot. Edit grub.conf to use
this new initrd; perhaps add a new stanza for it. Also edit your fstab to
reflect any device-name changes that result, such as a disc moving from an
IDE block device to a SCSI one (/dev/hda --> /dev/sda, etc.), unless of
course you are using LVM on the system disc as well.

I suspect (though you didn't provide much info) that since these very same
discs are in the new machine, their interface hasn't changed (they were
obviously SCSI or SATA, etc.), and the real issue is that your new rig's
controller is a newer device that needs newer modules -- say, if the system
disc was IDE or SATA on an old controller. It could be something else,
though; post back with any progress.

HTH,
jlc
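Concretely, a rough sketch of the initrd rebuild described above, run from the
rescue environment with the old root mounted under /mnt/sysimage. The kernel
version and module names below are assumptions; use whatever 'ls /boot' on the
old system and 'lspci' on the new box actually report:

# chroot /mnt/sysimage
# ls /boot                                  # note the exact installed kernel version
# mkinitrd -f --with=ata_piix --with=ahci \
    /boot/initrd-2.6.18-92.1.22.el5.img 2.6.18-92.1.22.el5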
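And the matching grub.conf stanza and fstab tweak might look something like
this; again the kernel version, partition layout and device names are guesses,
and it is worth keeping the old stanza around as a fallback:

title CentOS (2.6.18-92.1.22.el5) new initrd
        root (hd0,0)
        kernel /vmlinuz-2.6.18-92.1.22.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-92.1.22.el5.img

# /etc/fstab -- only needed where plain partitions (not LVM) are referenced
# by device name and the naming scheme changes on the new controller:
# old:  /dev/hda1   /boot   ext3   defaults   1 2
# new:  /dev/sda1   /boot   ext3   defaults   1 2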