CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1.

I rebooted one of my machines after doing some changes to RAID/LVM and now
the two RAID partitions that I made changes to are "gone". I cannot boot
into the system.

On bootup it tells me that the devices md2 and md3 are busy or mounted and
drops me to the repair shell. When I run fsck manually it just tells me
the same. mdadm --misc --detail tells me that md2 and md3 are active
and fine. I wanted to comment out the md2 and md3 devices in fstab (and
hoped then to be able to boot) but I get a "read-only" warning when
writing to it although mount tells me that / is mounted rw.

What can I do to boot into the system (the system is on /dev/md1 and seems
to be fine) or repair it?

The history of the changes is as follows.
Originally I had several software-RAID 1 partitions (/boot, /, /home,
/home2) on /dev/md0 etc. At the time of creation I didn't know I could use
LVM on RAID partitions. Yesterday I activated LVM on md2 and md3, as they
didn't contain anything valuable, and put some data on them. What I did
was: unmount, then remove from RAID, then initialize LVM, then create the
RAID devices again, then create the Volume Groups and Volumes, add mount
points, etc. All succeeded without errors and was working well thereafter.
I worked several hours with data and xen virtual machines on the LVM
partitions. The LVM management was done with the LVM manager in Gnome (as
I'm not very familiar with LVM), the rest was done in a terminal.

I assume something is wrong with the LVM setup or LVM doesn't start at
all. What/how can I check?

Thanks.

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
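Roughly, the sequence described above corresponds to commands like the
following sketch; the partition names, volume group/volume names, and
sizes are placeholders, not the ones actually used:

    # stop using the old array (example: md2 assembled from sda3 and sdb3)
    umount /dev/md2
    mdadm --stop /dev/md2

    # re-create the RAID 1 device
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # put LVM on top of the md device
    pvcreate /dev/md2
    vgcreate vg_data /dev/md2
    lvcreate -L 50G -n lv_data vg_data
    mkfs.ext3 /dev/vg_data/lv_data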
On Wed, Oct 17, 2007 at 04:54:55PM +0200, Kai Schaetzl wrote:
> CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1.
>
> I rebooted one of my machines after doing some changes to RAID/LVM and
> now the two RAID partitions that I made changes to are "gone". I cannot
> boot into the system.
> On bootup it tells me that the devices md2 and md3 are busy or mounted
> and drops me to the repair shell. When I run fsck manually it just tells
> me the same. mdadm --misc --detail tells me that md2 and md3 are active
> and fine. I wanted to comment out the md2 and md3 devices in fstab (and
> hoped then to be able to boot) but I get a "read-only" warning when
> writing to it although mount tells me that / is mounted rw.

mount uses /etc/mtab for displaying current mounts, which is not valid
this early in the boot. Check /proc/mounts for the correct values.

You can switch to rw with:

    mount / -o remount,rw

And then you'll be able to change fstab.

> What can I do to boot into the system (the system is on /dev/md1 and
> seems to be fine) or repair it?

The b option to init/boot boots in emergency mode. In extreme cases, use
init=/bin/bash to jump directly to a shell, and then do the remount.

> The history of the changes is as follows.
> Originally I had several software-RAID 1 partitions (/boot, /, /home,
> /home2) on /dev/md0 etc. At the time of creation I didn't know I could
> use LVM on RAID partitions. Yesterday I activated LVM on md2 and md3, as
> they didn't contain anything valuable, and put some data on them. What I
> did was: unmount, then remove from RAID, then initialize LVM, then create
> the RAID devices again, then create the Volume Groups and Volumes, add
> mount points, etc. All succeeded without errors and was working well
> thereafter. I worked several hours with data and xen virtual machines on
> the LVM partitions. The LVM management was done with the LVM manager in
> Gnome (as I'm not very familiar with LVM), the rest was done in a
> terminal.
>
> I assume something is wrong with the LVM setup or LVM doesn't start at
> all. What/how can I check?

Seems to be OK. What is happening is that you're telling the system to
check the filesystems that were on the MDs listed in fstab. As there are
none (it's LVM now), the boot process complains and drops you to a shell.
As soon as you fix fstab, you should boot OK. The LVM volumes/groups
should already be present, then.

--
lfr
0/0
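A minimal sketch of the repair-shell fix described above, assuming the
root filesystem itself is fine and only the stale md2/md3 entries in fstab
are the problem (the example fstab lines are placeholders, not the real
ones):

    # mtab is unreliable at this point; check the real state of / here
    cat /proc/mounts

    # remount the root filesystem read-write
    mount / -o remount,rw

    # edit /etc/fstab and comment out the stale lines, e.g. entries like:
    #   /dev/md2   /home    ext3   defaults   1 2
    #   /dev/md3   /home2   ext3   defaults   1 2
    vi /etc/fstab

    # then reboot and the system should come up normally
    reboot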
Luciano Rocha wrote on Wed, 17 Oct 2007 16:08:31 +0100:

> mount uses /etc/mtab for displaying current mounts, which is not valid
> this early in the boot. Check /proc/mounts for the correct values.
>
> You can switch to rw with:
>     mount / -o remount,rw
>
> And then you'll be able to change fstab.

Yeah, this worked, thanks. I'll write that down :-) It would be nice if
the system would ignore the problems with md2 and md3 and boot
nevertheless, as in this case it would have been harmless.

> The b option to init/boot boots in emergency mode.

If needed, where would I do that? Can I do an "init -b 3" in the repair
shell, or where would I do this?

> Seems to be OK. What is happening is that you're telling the system to
> check the filesystems that were on the MDs listed in fstab. As there are
> none (it's LVM now), the boot process complains and drops you to a shell.

Indeed. I thought that using the LVM manager would make the necessary
changes (whatever they were) for me. I always avoided LVM as much as I
could until recently, and when I did use it, I set it up during
installation. This was the first time I changed this stuff on a running
system. I learned something today :-)

I added the /dev/mapper entries as mounts to fstab now and remounted them
all, and everything is well. Thanks for the quick help!

I have a small question, though: one of the LVM partitions is used for a
(non-active) Xen VM and I cannot mount it as ext3. I know I have to
unmount it before I can run the VM on it. I want to have a look inside it.
Is there a way to mount it? xvda isn't recognized as a filesystem.

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
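For the record, the /dev/mapper entries mentioned above would look
something like this in /etc/fstab; the volume group and logical volume
names here are only guesses for illustration:

    /dev/mapper/vg_data-lv_home2   /home2    ext3   defaults          1 2
    /dev/mapper/vg_xen-xenvm1      /mnt/vm   ext3   defaults,noauto   0 0

After adding such lines, running "mount -a" picks up the new entries
without a reboot.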