Is it possible, or will there be any problems, with using mdraid on top of mdraid? Specifically, say, mdraid 1/5 on top of mdraid multipath.

e.g. 4 storage machines exporting iSCSI targets via two different
physical network switches,

then use multipath to create md block devices,

then use mdraid on these md block devices.

The purpose being the storage array surviving a physical network
switch failure.
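On the storage-machine side I have in mind something roughly like the following (assuming scsi-target-utils/tgt; the target name and backing device are made up for illustration). Each box exports its disk once and is then reachable via both of its addresses, one per switch:

    # on each storage machine: create a target, attach the disk, allow initiators
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2011-03.local:storage1.disk1
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL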
> Since you're exporting the storage as iSCSI, the host machine will see
> it as a raw disk, irrespective of how it's set up on the exporting
> server. So, yes, you can do this.

Sorry for my ambiguity, I meant that mdadm would be on the host machine.

e.g. Using just a 2-node, RAID 1 situation:

Storage 1
-> Disk exported on 192.168.1.10, 192.168.2.10

Storage 2
-> Disk exported on 192.168.1.20, 192.168.2.20

Host
-> mdadm multipath md0 = 192.168.1.10, 192.168.2.10
-> mdadm multipath md1 = 192.168.1.20, 192.168.2.20
---> mdadm RAID 1 md2 using md0, md1
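In terms of commands, I imagine something roughly like this on the host, assuming the two iSCSI sessions to Storage 1 show up as /dev/sdb and /dev/sdc and the two to Storage 2 as /dev/sdd and /dev/sde (device names here are only placeholders):

    # one mdadm multipath set per storage node
    mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdd /dev/sde

    # RAID 1 across the two multipath devices
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1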
On Mon, Mar 21, 2011 at 14:51, Emmanuel Noobadmin <centos.admin at gmail.com> wrote:
> Is it possible or will there be any problems with using mdraid on top of mdraid?
>
> specifically say
> mdraid 1/5 on top of mdraid multipath.
>
> e.g. 4 storage machines exporting iSCSI targets via two different
> physical network switches
>
> then use multipath to create md block devices
>
> then use mdraid on these md block devices
>
> The purpose being the raid storage array surviving a physical network
> switch failure.

Multipath should survive ONE switch failure, and your storage should provide a fast RAID implementation. You'll have really poor performance if you RAID over iSCSI...

--
Marcelo
"Could it be that this modern life has more of 'modern' in it than of 'life'?" (Mafalda)
On Mar 21, 2011, at 3:21 PM, Emmanuel Noobadmin <centos.admin at gmail.com> wrote:
>> Since you're exporting the storage as iSCSI, the host machine will see
>> it as a raw disk, irrespective of how it's set up on the exporting
>> server. So, yes, you can do this.
>
> Sorry for my ambiguity, I meant that mdadm would be on the host machine.
>
> e.g. Using just a 2-node, RAID 1 situation:
>
> Storage 1
> -> Disk exported on 192.168.1.10, 192.168.2.10
>
> Storage 2
> -> Disk exported on 192.168.1.20, 192.168.2.20
>
> Host
> -> mdadm multipath md0 = 192.168.1.10, 192.168.2.10
> -> mdadm multipath md1 = 192.168.1.20, 192.168.2.20
> ---> mdadm RAID 1 md2 using md0, md1

Yes, this will work; dm-multipath and mdraid use different subsystems. Configure open-iscsi to log into the two targets with two sessions each, set up dm-multipath to arrange the sessions as either round-robin or fail-over, then create an mdraid RAID 1 out of the two multipath targets (use their multipath identities, not their raw identities).

There may be timing issues on system shutdown/startup, so test that fully and tweak it until it starts up properly without requiring a resync.

-Ross
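A rough command-level sketch of that sequence, using the portal addresses from the example above (the multipath map names below are placeholders, not real values):

    # log into both portals for each storage node (two sessions per node)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m discovery -t sendtargets -p 192.168.2.10
    iscsiadm -m discovery -t sendtargets -p 192.168.1.20
    iscsiadm -m discovery -t sendtargets -p 192.168.2.20
    iscsiadm -m node -L all

    # /etc/multipath.conf: fail over between the two switches;
    # use "multibus" instead of "failover" for round-robin across both paths
    defaults {
            path_grouping_policy    failover
    }

    # confirm each storage node shows up as one map with two paths
    multipath -ll

    # build the RAID 1 on the multipath map names, not on the raw /dev/sd* paths
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/<wwid_of_storage1> /dev/mapper/<wwid_of_storage2>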