hello all,

I have a setup that is raid 1. I put the mirrored drive back in and now it is still showing as degraded, saying: raid1: raid set md6 active with 1 out of 2 mirrors, with this message on all the raids. I know I am wrong by saying this, but I thought putting in the drive and rebooting would start the resyncing itself. What do I have to do to add this back in? I am so confused with this process.

centos 4.x
Steven Vishoot wrote:
> hello all,
>
> I have a setup that is raid 1 and put the mirrored drive back in and now it is still showing as degraded saying: raid1: raid set md6 active with 1 out of 2 mirrors with this message on all the raids. i know i am wrong by saying this but i thought putting in the drive and rebooting would start the resyncing itself. what do i have to do to add this back in, i am so confused with this process.

A 'cat /proc/mdstat' will show which partitions are active in each md set. Use 'mdadm --add /dev/md_device /dev/partition_device' to add back the missing partitions, being careful to match up the right device names for the raid sets and partitions. This will start a sync with the newly added partition, and you can watch the progress with 'cat /proc/mdstat'. I'm not sure exactly what conditions trigger an automatic sync and what requires the manual one, but if the system goes down cleanly they will pair up at reboot as long as the partition type is FD.

--
Les Mikesell
lesmikesell at gmail.com
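[Editor's note: a small sketch of the /proc/mdstat check described above. The sample output here is hypothetical; on a live box you would read /proc/mdstat itself. A degraded raid1 set shows "[2/1]" and an underscore in the status field, e.g. "[U_]", where a healthy one shows "[2/2] [UU]".]

```shell
#!/bin/sh
# Hypothetical /proc/mdstat excerpt: md6 is missing one mirror,
# md0 has both. On a real system: mdstat_sample=$(cat /proc/mdstat)
mdstat_sample='md6 : active raid1 hda6[0]
      1052160 blocks [2/1] [U_]

md0 : active raid1 hda1[0] hdb1[1]
      104320 blocks [2/2] [UU]'

# Print the name of any array whose status field contains "_",
# which marks a missing mirror.
printf '%s\n' "$mdstat_sample" | awk '
/^md/        { dev = $1 }                  # remember current array name
/\[U*_+U*\]/ { print dev " is degraded" }  # "_" = missing member
'
```

Running this against the sample prints "md6 is degraded"; that is the array you would then repair with 'mdadm --add'.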
On Fri, Jul 3, 2009 at 9:15 PM, Steven Vishoot <sir_funzone at yahoo.com> wrote:
>
> hello all,
>
> I have a setup that is raid 1 and put the mirrored drive back in and now it is still showing as degraded saying: raid1: raid set md6 active with 1 out of 2 mirrors with this message on all the raids. i know i am wrong by saying this but i thought putting in the drive and rebooting would start the resyncing itself. what do i have to do to add this back in, i am so confused with this process.
>
> centos 4.x
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

Well, just putting a new disk drive in the place of the bad one doesn't cut it. You have to recreate the partition table on the new disk and then add its partitions back to the raid arrays.

e.g.

sda = the old disk in the raid that has not failed
sdb = the newly added disk

dd if=/dev/sda of=/dev/sdb bs=512 count=1

That'll replicate the partition table and MBR to the new disk. Then start adding the new partitions to the linux raid:

mdadm -a /dev/md0 /dev/sdb1

and so on, depending on what your setup is. Do a:

cat /proc/mdstat

and see which partitions are added to which raid. Alternatively, do a google search for howtos (e.g. http://www.gagme.com/greg/linux/raid-lvm.php ) and learn how to manage linux raid so you don't fsck up your system.
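[Editor's note: a demonstration of what the dd step above actually does. The first 512 bytes of an MBR-partitioned disk hold the boot code and the partition table, so copying that one sector clones the layout. To avoid touching real /dev/sd* devices, this sketch uses two scratch image files as stand-ins; the filenames are invented for illustration.]

```shell
#!/bin/sh
# Stand-ins for the two disks: random bytes play the role of the
# healthy disk's MBR, zeros play the role of the blank replacement.
dd if=/dev/urandom of=old_disk.img bs=512 count=1 2>/dev/null
dd if=/dev/zero    of=new_disk.img bs=512 count=1 2>/dev/null

# The same command the post uses, just with files instead of disks.
# conv=notrunc keeps dd from truncating the destination file.
dd if=old_disk.img of=new_disk.img bs=512 count=1 conv=notrunc 2>/dev/null

# Verify the first sector (and with it the partition table) matches.
cmp -s old_disk.img new_disk.img && echo "partition table copied"
```

Note this only applies to MBR ("msdos") partition tables, which is what a CentOS 4 system would have. An alternative that copies just the table, not the boot code, is 'sfdisk -d /dev/sda | sfdisk /dev/sdb'; either way, check the result with 'fdisk -l /dev/sdb' before running the mdadm -a commands.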