Hi,

I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:

[root@server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
      1052160 blocks [2/2] [UU]

md1 : active raid1 hda3[0]
      77023552 blocks [2/1] [U_]

md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]

What is happening with md1?

My dmesg output is:

[root@server admin]# dmesg | grep md1

Kernel command line: ro root=/dev/md1 rhgb quiet
md: created md1
raid1: raid set md1 active with 1 out of 2 mirrors
md: md1 already running, cannot run hdc3
md: md1 already running, cannot run hdc3
EXT3-fs: md1: orphan cleanup on readonly fs
EXT3-fs: md1: 4 orphan inodes deleted
md: md1 already running, cannot run hdc3
EXT3 FS on md1, internal journal
[root@server admin]#

Thanks for any help!

roberto

--
Ing. Roberto Pereyra
ContenidosOnline
http://www.contenidosonline.com.ar
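In the output above, "[2/1] [U_]" means md1 has two slots defined but only one active member (hda3); the repeated "md: md1 already running, cannot run hdc3" lines mean the second member, hdc3, was found at boot but not accepted back into the already-running array. A minimal read-only check, using only the device names shown above and standard CentOS 4 tools (mdadm and dmesg), would be something like:

# mdadm --detail /dev/md1      (should list one slot as removed or faulty)
# dmesg | grep -i hdc          (any I/O errors reported against the hdc disk?)

Neither command changes the array; they only gather information before deciding whether hdc3 can simply be re-added or whether the disk needs replacing.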
On Wed, April 25, 2007 11:54 am, Roberto Pereyra wrote:
> Hi
>
> I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:
>
> [root@server admin]# cat /proc/mdstat
> Personalities : [raid1]
> md2 : active raid1 hdc2[1] hda2[0]
>       1052160 blocks [2/2] [UU]
>
> md1 : active raid1 hda3[0]
>       77023552 blocks [2/1] [U_]
>
> md0 : active raid1 hdc1[1] hda1[0]
>       104320 blocks [2/2] [UU]
>
> What is happening with md1?

hdc3 appears to be the problem. Try using mdadm's query/examine mode (man
mdadm) to find out more about md1 and /dev/hdc3. Are you sure /dev/hdc3 is
set to the "Linux raid autodetect" partition type? (Use fdisk to find out.)
You can also try adding hdc3 back to the array with mdadm - see the man
page for details.

cheers;

Alex
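A short sketch of Alex's suggestions as actual commands, assuming hdc3 is the member that dropped out of md1 (the device names are taken from the mdstat output above; the commands are plain mdadm/fdisk usage, nothing specific to this report):

# mdadm --examine /dev/hdc3    (read the md superblock on the dropped partition)
# fdisk -l /dev/hdc            (the Id for hdc3 should be "fd", Linux raid autodetect)

If both look sane and the logs show no hardware errors on hdc, the partition can be hot-added back to md1 with mdadm's --add option, as sketched after the next reply.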
Hi Roberto.

The RAID 1 array md1 is degraded. The partition hdc3 has been dropped by the
system, which is now running on only one of the two mirror partitions (hda3).

You can get more details with:

mdadm --detail /dev/md1

Sorry, my English is not good. md1 is a RAID 1 of hda3 and hdc3, but hdc3 has
failed and the system is running on hda3 alone.

Roberto Pereyra wrote:
> I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:
>
> md1 : active raid1 hda3[0]
>       77023552 blocks [2/1] [U_]
>
> What is happening with md1?
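Once hdc3 (or a replacement disk partitioned the same way) is known to be good, a typical recovery sequence would be roughly the following; it assumes the partition layout on hdc is unchanged and uses only the device names from this thread:

# mdadm --detail /dev/md1          (confirm one slot shows as removed or faulty)
# mdadm /dev/md1 --add /dev/hdc3   (hot-add the second mirror half back)
# watch cat /proc/mdstat           (md1 should show recovery progress until it reads [UU] again)

The resync runs in the background, so the root filesystem on md1 stays mounted and usable while it rebuilds.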