Hi all,

I have set up a RAID1 between two iSCSI disks, and the mdadm command goes well. The problem starts when I reboot the server (CentOS 5.4, fully updated): the raid is lost, and I don't understand why.

"mdadm --detail --scan" returns no output. "mdadm --examine --scan" returns:

   ARRAY /dev/md0 level=raid1 num-devices=2 UUID=9295e5a2:b28d4fbd:b61fed29:f232ebfe

and that is ok. My mdadm.conf is:

   DEVICE partitions
   ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=9295e5a2:b28d4fbd:b61fed29:f232ebfe devices=/dev/iopsda1,/dev/iopsdb1
   MAILADDR root

The mdmonitor init script is activated. Why is md0 not activated when I reboot this server? How can I make it persistent between reboots?

Many thanks.

--
CL Martinez
carlopmart {at} gmail {d0t} com
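(A quick way to test assembly by hand after a reboot, a sketch using the mdadm.conf above:

   cat /proc/mdstat             # shows whether any array came up at boot
   mdadm --assemble --scan -v   # tries to assemble everything in mdadm.conf, verbosely

If the manual --assemble --scan succeeds where boot did not, the member devices simply were not present yet when the init scripts ran.)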
On Wed, 2009-11-11 at 12:43 +0100, carlopmart wrote:
> Hi all,
>
> I have set up a RAID1 between two iSCSI disks, and the mdadm command
> goes well. The problem starts when I reboot the server (CentOS 5.4,
> fully updated): the raid is lost, and I don't understand why.

Just a thought, but are the raid partitions all marked as partition ID type fd (Linux raid autodetect)?

regards

Brendan
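One way to check and fix that (a sketch; /dev/iopsda is taken from the device names in the mdadm.conf above, adjust as needed):

   # list the partition table; the RAID member should show Id "fd"
   fdisk -l /dev/iopsda
   # check and, if needed, change the type of partition 1 non-interactively
   sfdisk --print-id /dev/iopsda 1
   sfdisk --change-id /dev/iopsda 1 fd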
carlopmart wrote:
> Hi all,
>
> I have set up a RAID1 between two iSCSI disks

Why would you think to attempt this? iSCSI is slow enough as it is; layering RAID on top of it would be even worse. Run RAID on the remote iSCSI system, and don't try to do RAID between two networked iSCSI volumes, as that will hurt performance even more.

> Why is md0 not activated when I reboot this server? How can I make it
> persistent between reboots?

Probably because the iSCSI sessions are not established yet when the software raid startup kicks in. You must manually start the RAID volume after the iSCSI sessions are established.

The exception would be if you were using a hardware iSCSI HBA, in which case the devices wouldn't show up as iSCSI volumes; they would show up as plain SCSI volumes and be available immediately upon booting, as the HBA would handle session management.

nate
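One way to do that manual start on CentOS 5 is from /etc/rc.d/rc.local, which runs last in the boot order, after the iscsi init script has logged in (a sketch; the member devices come from the mdadm.conf above, while the sleep value and the mount point /mnt/iscsi-raid are assumptions):

   # give the iSCSI sessions a moment to settle, then assemble the array
   sleep 15
   mdadm --assemble /dev/md0 /dev/iopsda1 /dev/iopsdb1
   # mount it once the array is up
   mount /dev/md0 /mnt/iscsi-raid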