I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.

Filesystem            Size  Used Avail Use% Mounted on
/dev/md2               97G  918M   91G   1% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/md1              485M   54M  407M  12% /boot
/dev/md3              3.4T  198M  3.2T   1% /vz

Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0 [2/2] [UU]
md3 : active raid1 sda4[0] sdb4[1]
      3672901440 blocks super 1.1 [2/2] [UU]
      bitmap: 0/28 pages [0KB], 65536KB chunk
md2 : active raid1 sdb3[1] sda3[0]
      102334336 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md0 : active raid1 sdb2[1] sda2[0]
      131006336 blocks super 1.1 [2/2] [UU]

My question is: if sda fails, will the machine still boot from sdb? Did the install process write the boot sector to both disks or just sda? How do I check, and if it's not on sdb, how do I copy it there?
In article <CAAOM8FXumoSAgbDe+PzryraRUHcsWOjWJQf-3Mc0TSn4ODRt9w at mail.gmail.com>, Matt <matt.mailinglists at gmail.com> wrote:

> I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
> in a RAID 1 array.
> [...]
> My question is: if sda fails, will the machine still boot from sdb? Did the
> install process write the boot sector to both disks or just sda? How
> do I check, and if it's not on sdb, how do I copy it there?

Tests I did some years ago indicated that the install process does not write
GRUB boot information onto sdb, only sda. This was on Fedora 3 or CentOS 4.
I don't know if it has changed since then, but I always put the following in
the %post section of my kickstart files:

# install grub on the second disk too
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF

Cheers
Tony
-- 
Tony Mountifield
Work: tony at softins.co.uk - http://www.softins.co.uk
Play: tony at mountifield.org - http://tony.mountifield.org
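P.S. A rough way to see whether boot code actually made it onto both drives
(just a sketch, assuming the disks really are /dev/sda and /dev/sdb) is to
compare the boot-code portion of each MBR:

# compare the first 446 bytes (the boot code) of each disk's MBR;
# differing sums, or an all-zero first sector on sdb, suggest GRUB
# was never installed there
dd if=/dev/sda bs=446 count=1 2>/dev/null | md5sum
dd if=/dev/sdb bs=446 count=1 2>/dev/null | md5sum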
On Fri, Jan 24, 2014 at 11:58 AM, Matt <matt.mailinglists at gmail.com> wrote:

> I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
> in a RAID 1 array.
> [...]
> My question is: if sda fails, will the machine still boot from sdb? Did the
> install process write the boot sector to both disks or just sda? How
> do I check, and if it's not on sdb, how do I copy it there?

I've found that the GRUB boot loader is only installed on the first disk.
When I do use software RAID, I have made a habit of manually installing GRUB
on the other disks (using grub-install). In most cases I dedicate a RAID1
array to the host OS and have a separate array for storage.

You can check that a boot loader is present with `file`:

~]# file -s /dev/sda
/dev/sda: x86 boot sector; partition 1: ID=0xfd, active, starthead 1,
startsector 63, 224847 sectors; partition 2: ID=0xfd, starthead 0,
startsector 224910, 4016250 sectors; partition 3: ID=0xfd, starthead 0,
startsector 4241160, 66878595 sectors, code offset 0x48

There are other ways to verify the boot loader is present, but that's the
one I remember off the top of my head. Use grub-install to install GRUB to
the MBR of the other disk.

-- 
---~~.~~---
Mike // SilverTip257 //
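P.S. The grub-install part is a one-liner (a sketch; make sure /dev/sdb really
is the second RAID member before running it):

# write GRUB's stage1 into the MBR of the second disk, then re-check it
grub-install /dev/sdb
file -s /dev/sdb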
> I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
> in a RAID 1 array.
> [...]
> My question is: if sda fails, will the machine still boot from sdb? Did the
> install process write the boot sector to both disks or just sda? How
> do I check, and if it's not on sdb, how do I copy it there?

Based on input from everyone here I am thinking of an alternate setup: a
single small, inexpensive 64GB SSD for /boot, / and swap, with /vz on a
software RAID1 array across the two 4TB drives. I can likely just zip-tie
the SSD somewhere in the 1U case since I have no more drive bays. Does this
seem like a better layout?
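Roughly what I have in mind for the /vz mirror, in case it helps (device
names and the filesystem are just guesses on my part; they depend on how the
drives enumerate once the SSD is in):

# mirror one big partition from each 4TB drive and mount the array at /vz
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf
echo "/dev/md0  /vz  ext4  defaults  0 0" >> /etc/fstab
mount /vz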
On 01/24/2014 06:58 PM, Matt wrote:

> I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
> in a RAID 1 array.
> [...]

For quite some time (since the 5.x era) I have used
http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1
(with 6.x I don't even need the patch to mkinitrd).

The MBR (or whatever it is) is written to /dev/md_d0, and that's it. In the
BIOS you set both hard disks as boot devices; if the first one has a problem,
the second will boot, mail you that you have a degraded RAID, and start a
resync after you replace the drive (and you can do it live).

HTH,
Adrian
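The mail notification is just mdadm's monitor mode; a minimal sketch (the
address is a placeholder, use your own):

# /etc/mdadm.conf -- where degraded-array mail should go
MAILADDR root@example.com

# CentOS 6 ships an init script that runs mdadm --monitor
service mdmonitor start
chkconfig mdmonitor on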