Niki Kovacs
2010-Dec-04 17:34 UTC
[CentOS] Fiddling with software RAID1 : continue working with one of two disks failing?
Hi,

I'm currently experimenting with software RAID1 on a spare PC with two
40 GB hard disks. Normally, on a desktop PC with only one hard disk, I
have a very simple partitioning scheme like this:

/dev/hda1    80 MB   /boot   ext2
/dev/hda2     1 GB   swap
/dev/hda3    39 GB   /       ext3

Here's what I'd like to do. Partition a second hard disk (say, /dev/hdb)
with three partitions. Set up RAID1 like this:

/dev/md0     80 MB   /boot   ext2
/dev/md1      1 GB   swap
/dev/md2     39 GB   /       ext3

I somehow managed to get this far. Here's what I have:

[root at raymonde ~]# fdisk -l /dev/hda

Disk /dev/hda: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks    Id  System
/dev/hda1   *        1     11      88326    fd  Linux raid autodetect
/dev/hda2           12    134     987997+   fd  Linux raid autodetect
/dev/hda3          135   4998   39070080    fd  Linux raid autodetect

[root at raymonde ~]# fdisk -l /dev/hdb

Disk /dev/hdb: 41.1 GB, 41110142976 bytes
16 heads, 63 sectors/track, 79656 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot   Start    End     Blocks    Id  System
/dev/hdb1   *        1    156      78592+   fd  Linux raid autodetect
/dev/hdb2          157   2095     977256    fd  Linux raid autodetect
/dev/hdb3         2096  79656   39090744    fd  Linux raid autodetect

During install, my /dev/md1 and /dev/md2 somehow got mixed up, which
doesn't really matter:

[root at raymonde ~]# cat /etc/fstab
/dev/md1     /          ext3    defaults        1 1
/dev/md0     /boot      ext2    defaults        1 2
tmpfs        /dev/shm   tmpfs   defaults        0 0
devpts       /dev/pts   devpts  gid=5,mode=620  0 0
sysfs        /sys       sysfs   defaults        0 0
proc         /proc      proc    defaults        0 0
/dev/md2     swap       swap    defaults        0 0

I wasn't sure where to install GRUB, so I chose /dev/md0.

I was wondering if this setup theoretically enabled me to continue
working with one of the two disks failing. So I tried unplugging the
power cord of one of my hard disks... which resulted in a "GRUB Disk
Error" on boot.

Question: is there a way to still run the system with either of the two
disks "damaged" (in this case: unplugged)? And if so, how would I have
to go about it in my setup?

Cheers from the freezing South of France,

Niki
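For reference, a setup like the one described above is typically built with mdadm. A minimal sketch, assuming the partition layout shown in the fdisk output (the exact commands are illustrative, not taken from the original install, which was done through the installer):

```shell
# Create the three RAID1 arrays from the matching partitions on both
# disks. Assumes /dev/hda and /dev/hdb are partitioned as shown above;
# run from an installer/rescue shell, NOT on disks holding live data.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda3 /dev/hdb3
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2

# Put filesystems and swap on the arrays, matching the fstab above.
mke2fs /dev/md0        # /boot, ext2
mke2fs -j /dev/md1     # /, ext3 (-j adds the journal)
mkswap  /dev/md2       # swap

# Watch the initial mirror synchronization.
cat /proc/mdstat
```

The `fd` (Linux raid autodetect) partition type set in the fdisk output is what lets the kernel assemble these arrays automatically at boot.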
Les Mikesell
2010-Dec-04 18:01 UTC
[CentOS] Fiddling with software RAID1 : continue working with one of two disks failing?
On 12/4/10 11:34 AM, Niki Kovacs wrote:
>
> I wasn't sure where to install GRUB, so I chose /dev/md0.

GRUB doesn't know anything about RAID. It only works because each
component of a RAID1 looks just like a non-RAID filesystem. You should
install GRUB in the master boot record of both member disks.

> I was wondering if this setup theoretically enabled me to continue
> working with one disk failure. So I tried unplugging the power cord of
> one of my hard disks... which resulted in a "GRUB Disk Error" on boot.
> Question: is there a way to still run the system with either of the two
> disks "damaged" (in this case: unplugged)? And if so, how would I have
> to go about it in my setup?

Yes, RAID1 isn't bothered at all by missing members. You just have to
install GRUB on the underlying disks as though you did not have RAID.
There can be some differences in the way IDE/SATA/SCSI controllers
handle a disk failure, and IDE controllers often hang if one of two
disks on the same cable fails. Normally the first disk will always work
if the second fails; but if the first disk fails, both the BIOS and
Linux have to see the same shift in positions. So even though you are
installing GRUB on what it calls hd1, if the system boots from that
disk, GRUB will see its root at hd0,1. Worst case: just be prepared to
boot a rescue disk and reinstall GRUB if your first disk fails.

When you replace the drive and want to rebuild the mirror, just:

mdadm --add /dev/md? /dev/sd??

--
Les Mikesell
lesmikesell at gmail.com
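The advice above about installing GRUB on both underlying disks can be sketched with the GRUB (legacy) shell, as shipped on CentOS of that era. The `device` remap on the second disk is the usual trick so that its copy of GRUB looks for its stage files on itself once it boots as the first disk; device names are assumed from the original post:

```shell
# Install GRUB legacy into the MBR of BOTH member disks so either one
# can boot alone.  (hd0,0) is the first partition of the first disk,
# i.e. the /boot mirror half on each drive.
grub --batch <<'EOF'
root (hd0,0)
setup (hd0)
device (hd0) /dev/hdb
root (hd0,0)
setup (hd0)
quit
EOF
```

Without the `device (hd0) /dev/hdb` remap, `setup` on the second disk would embed pointers assuming it is still the second BIOS disk, which is exactly the position shift described above.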
Robert Heller
2010-Dec-04 18:49 UTC
[CentOS] Fiddling with software RAID1 : continue working with one of two disks failing?
At Sat, 04 Dec 2010 18:34:26 +0100 CentOS mailing list <centos at centos.org> wrote:
>
> Hi,
>
> I'm currently experimenting with software RAID1 on a spare PC with two
> 40 GB hard disks. Normally, on a desktop PC with only one hard disk, I
> have a very simple partitioning scheme like this :
>
> /dev/hda1 80 MB /boot ext2
> /dev/hda2 1 GB swap
> /dev/hda3 39 GB / ext3
>
> Here's what I'd like to do. Partition a second hard disk (say, /dev/hdb)
> with three partitions. Setup RAID1 like this :
>
> /dev/md0 80 MB /boot ext2
> /dev/md1 1 GB swap
> /dev/md2 39 GB / ext3
>
> I somehow managed to get this far.
>
> [...]
>
> I wasn't sure where to install GRUB, so I chose /dev/md0.

No, you install GRUB (or alternatively, LILO) on *both* /dev/hda AND
/dev/hdb, with GRUB's root pointing at the /boot partition, /dev/hda1.
Neither GRUB nor LILO knows about RAID (ditto for the BIOS). This is
not a problem, since the *elements* of a RAID1 set look like 'normal'
partitions with normal file systems on them. You want GRUB to be in the
MBR of /dev/hda; duplicating it in /dev/hdb's MBR allows you to boot
(with degraded RAID sets) from /dev/hdb (cabled & jumpered to be
/dev/hda) in the event /dev/hda dies.

> I was wondering if this setup theoretically enabled me to continue
> working with one disk failure. So I tried unplugging the power cord of
> one of my hard disks... which resulted in a "GRUB Disk Error" on boot.
>
> Question : is there a way to still run the system with either of the two
> disks "damaged" (in this case : unplugged)? And if so, how would I have
> to go about it in my setup?

Yes, see above.

Minor performance nit: doing RAID with two IDE disks on the *same*
controller is not going to buy you anything in terms of performance. I
suspect this is just experimental, mostly to get a feel for how to set
things up, so this is not a major issue.

> Cheers from the freezing South of France,
>
> Niki

-- 
Robert Heller             -- 978-544-6933 / heller at deepsoft.com
Deepwoods Software        -- http://www.deepsoft.com/
()  ascii ribbon campaign -- against html e-mail
/\  www.asciiribbon.org   -- against proprietary attachments
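As a footnote to both replies: degraded-array behaviour can also be exercised without unplugging a power cord, using mdadm's fault-injection flags. A sketch, with device names assumed from the setup above:

```shell
# Gentler than pulling a cable: mark one member failed, inspect the
# degraded array, then re-add the member and let the kernel resync.
mdadm /dev/md2 --fail /dev/hdb3      # simulate a failure of one half
cat /proc/mdstat                     # degraded mirrors show e.g. [U_]
mdadm --detail /dev/md2              # per-device state (faulty/active)
mdadm /dev/md2 --remove /dev/hdb3    # take the failed member out
mdadm /dev/md2 --add /dev/hdb3       # re-add; resync starts in background
```

This tests the md layer's handling of a failed member; it does not test the BIOS/GRUB boot path discussed above, for which physically removing a disk (or booting with one disk detached) remains the realistic check.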