Matthew C Aycock
2006-Dec-12 18:13 UTC
[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
We are currently working on a plan to upgrade our HA-NFS cluster that uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is there a known procedure or best practice for this? I have enough free disk space to recreate all the filesystems and copy the data if necessary, but would like to avoid copying if possible.

Also, I am considering what type of zpools to create. I have a SAN with T3Bs and SE3511s. Since neither of these can work as a JBOD (at least that is what I remember), I guess I am going to have to add the LUNs into a mirrored zpool of the RAID-5 LUNs?

We are at the extreme start of this project and I was hoping for some guidance as to what direction to start in.
Robert Milkowski
2006-Dec-12 18:40 UTC
[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
Hello Matthew,

Tuesday, December 12, 2006, 7:13:47 PM, you wrote:

MCA> We are currently working on a plan to upgrade our HA-NFS cluster
MCA> that uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10
MCA> and ZFS. Is there a known procedure or best practice for this? I
MCA> have enough free disk space to recreate all the filesystems and
MCA> copy the data if necessary, but would like to avoid copying if possible.

You will have to copy the data. Also keep in mind that ZFS is supported in Sun Cluster 3.2, which is not out yet (it should be really soon now).

MCA> Also, I am considering what type of zpools to create. I have a
MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
MCA> JBOD (at least that is what I remember), I guess I am going to
MCA> have to add the LUNs into a mirrored zpool of the RAID-5 LUNs?

1. Those boxes can work as JBODs, but not in a clustered environment.
2. As for the array configurations - well, it depends. I would suggest doing redundancy at the ZFS level at least. For some performance numbers on those arrays with ZFS, see the list archives.

--
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                 http://milek.blogspot.com
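A minimal sketch of that copy step, assuming the old VxFS filesystem is still mounted at /export/data_old, the new pool is called tank, and the new dataset is mounted at /export/data (pool and path names are made up); cpio or rsync both work, the point is only that the data has to be walked file by file:

   # create the target dataset in the new pool (names are assumptions)
   zfs create tank/data
   zfs set mountpoint=/export/data tank/data

   # copy everything, preserving ownership, modes and mtimes
   cd /export/data_old && find . -depth -print | cpio -pdmu /export/data

   # or, if rsync is installed:
   rsync -aH /export/data_old/ /export/data/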
Richard Elling
2006-Dec-12 19:40 UTC
[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
Matthew C Aycock wrote:
> We are currently working on a plan to upgrade our HA-NFS cluster that
> uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is
> there a known procedure or best practice for this? I have enough free disk
> space to recreate all the filesystems and copy the data if necessary, but
> would like to avoid copying if possible.

You will need to copy the data from the old file system into ZFS.

> Also, I am considering what type of zpools to create. I have a SAN with
> T3Bs and SE3511s. Since neither of these can work as a JBOD (at least that
> is what I remember), I guess I am going to have to add the LUNs into a
> mirrored zpool of the RAID-5 LUNs?

Lacking other information, particularly performance requirements, what you suggest is a good strategy: ZFS mirrors of RAID-5 LUNs.

> We are at the extreme start of this project and I was hoping for some
> guidance as to what direction to start in.

By all means, read the Sun Cluster Concepts Guide first. It will answer many questions that may arise as you go through the design. Note that version 3.2, which is required for ZFS, has updates to the Concepts Guide regarding the use of ZFS, available RSN.
 -- richard
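As a sketch of that layout, assuming one RAID-5 LUN from each array is visible to the host as c2t0d0 and c3t0d0 (the device names are invented), so that each half of every ZFS mirror sits on different hardware:

   # mirror a RAID-5 LUN from the T3B against one from the SE3511
   zpool create nfspool mirror c2t0d0 c3t0d0

   # grow the pool later by adding further mirrored pairs of LUNs
   zpool add nfspool mirror c2t1d0 c3t1d0

   zpool status nfspool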
Torrey McMahon
2006-Dec-12 22:40 UTC
[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
Robert Milkowski wrote:
> Hello Matthew,
>
> MCA> Also, I am considering what type of zpools to create. I have a
> MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
> MCA> JBOD (at least that is what I remember), I guess I am going to
> MCA> have to add the LUNs into a mirrored zpool of the RAID-5 LUNs?
>
> 1. Those boxes can work as JBODs, but not in a clustered environment.

Actually, those boxes can't act as JBODs. They only present LUNs created from the drives in the enclosures.
Robert Milkowski
2006-Dec-13 11:25 UTC
[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
Hello Torrey,

Tuesday, December 12, 2006, 11:40:42 PM, you wrote:

TM> Robert Milkowski wrote:
>> Hello Matthew,
>>
>> MCA> Also, I am considering what type of zpools to create. I have a
>> MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
>> MCA> JBOD (at least that is what I remember), I guess I am going to
>> MCA> have to add the LUNs into a mirrored zpool of the RAID-5 LUNs?
>>
>> 1. Those boxes can work as JBODs, but not in a clustered environment.

TM> Actually, those boxes can't act as JBODs. They only present LUNs created
TM> from the drives in the enclosures.

Well, the 3510 is even supported as a JBOD by Sun. The only limitation is that you can use only one FC link.

I have tried both the 3510 and the 3511 as JBODs - the 3510 works fine, with the 3511 I had some problems under higher load.

--
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                 http://milek.blogspot.com
Torrey McMahon
2006-Dec-13 18:31 UTC
[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
Robert Milkowski wrote:
> Hello Torrey,
>
> Tuesday, December 12, 2006, 11:40:42 PM, you wrote:
>
> TM> Robert Milkowski wrote:
>
>>> Hello Matthew,
>>>
>>> MCA> Also, I am considering what type of zpools to create. I have a
>>> MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
>>> MCA> JBOD (at least that is what I remember), I guess I am going to
>>> MCA> have to add the LUNs into a mirrored zpool of the RAID-5 LUNs?
>>>
>>> 1. Those boxes can work as JBODs, but not in a clustered environment.
>
> TM> Actually, those boxes can't act as JBODs. They only present LUNs created
> TM> from the drives in the enclosures.
>
> Well, the 3510 is even supported as a JBOD by Sun. The only limitation is
> that you can use only one FC link.
>
> I have tried both the 3510 and the 3511 as JBODs - the 3510 works fine,
> with the 3511 I had some problems under higher load.

The 3510 JBOD, sure. The 3511 as a JBOD is a different beast. That one goes to 11. ;)

The 3510 JBOD with multipathing was supported if you used VxVM, from what I recall.
> Well, the 3510 is even supported as a JBOD by Sun. The only limitation is
> that you can use only one FC link.
>
> I have tried both the 3510 and the 3511 as JBODs - the 3510 works fine,
> with the 3511 I had some problems under higher load.
>
> --
> Best regards,
> Robert

Hi Robert,

I saw in your post that you had problems with the 3510 JBOD and multipath. We are building a poor man's filer with 2x DL360 and 4x 3510 JBOD and have had no problems so far. Multipath is working fine for us. Can you tell me what you found?

tnx,
Gino
I am trying to bring up a 3510 JBOD on Solaris 10 and would like to enable multipathing. I have connected both ports on a dual-port HBA to two loops (FC0 and FC5). This is an X4100 running Solaris 10. When I run the format command I only see 12 drives - I was expecting that when a 3510 FC JBOD array is connected to a host over two loops, it would show 24 drives (two entries for each drive).

What am I missing?

Thanks.
On Wed, May 21, 2008 at 9:54 AM, Krutibas Biswal <kbiswal at sun.com> wrote:
> I am trying to bring up a 3510 JBOD on Solaris 10 and would like to enable
> multipathing. I have connected both ports on a dual-port HBA to two loops
> (FC0 and FC5). This is an X4100 running Solaris 10. When I run the format
> command I only see 12 drives - I was expecting that when a
> 3510 FC JBOD array is connected to a host over two loops, it would show
> 24 drives (two entries for each drive).
>
> What am I missing?

Unlike sparc, mpxio is enabled by default on x86. Are you already multipathed?

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
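A quick way to check (a sketch; device names are only illustrative): with MPxIO active, each physical drive shows up once, under a single scsi_vhci node with a long WWN-based target name, and mpathadm lists it with two operational paths.

   # list multipathed logical units and their path counts
   mpathadm list lu

   # show how device names were remapped when STMS/MPxIO was enabled
   stmsboot -L

   # with MPxIO on, format shows one entry per disk, named like
   # c4t<WWN>d0, rather than separate c2t.../c3t... entries per path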
On x64 Solaris 10, the default setting of mpxio was:

  mpxio-disable="no";

I changed it to

  mpxio-disable="yes";

and rebooted the machine, and it detected 24 drives.

Thanks,
Krutibas

Peter Tribble wrote:
> On Wed, May 21, 2008 at 9:54 AM, Krutibas Biswal <kbiswal at sun.com> wrote:
>> I am trying to bring up a 3510 JBOD on Solaris 10 and would like to enable
>> multipathing. I have connected both ports on a dual-port HBA to two loops
>> (FC0 and FC5). This is an X4100 running Solaris 10. When I run the format
>> command I only see 12 drives - I was expecting that when a
>> 3510 FC JBOD array is connected to a host over two loops, it would show
>> 24 drives (two entries for each drive).
>>
>> What am I missing?
>
> Unlike sparc, mpxio is enabled by default on x86. Are you already
> multipathed?
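For reference, on Solaris 10 the FC-side mpxio-disable property lives in /kernel/drv/fp.conf (presumably the file edited above), and the supported way to flip it is stmsboot, which also updates /etc/vfstab for you. A sketch:

   # /kernel/drv/fp.conf - global setting for FC HBA ports
   #   mpxio-disable="no";   -> MPxIO (scsi_vhci) enabled
   #   mpxio-disable="yes";  -> MPxIO disabled, each path appears as its own disk

   # enable MPxIO the supported way (asks for a reboot):
   stmsboot -e

   # disable it again:
   stmsboot -d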
Hello Krutibas,

Wednesday, May 21, 2008, 10:43:03 AM, you wrote:

KB> On x64 Solaris 10, the default setting of mpxio was:
KB> mpxio-disable="no";
KB> I changed it to
KB> mpxio-disable="yes";
KB> and rebooted the machine, and it detected 24 drives.

Originally you wanted to get it multipathed, which was the case by default. Now you have disabled it (well, you still have two paths, but no automatic failover).

--
Best regards,
 Robert                          mailto:milek at task.gda.pl
                                 http://milek.blogspot.com
Robert Milkowski wrote:
> Hello Krutibas,
>
> Wednesday, May 21, 2008, 10:43:03 AM, you wrote:
>
> KB> On x64 Solaris 10, the default setting of mpxio was:
> KB> mpxio-disable="no";
> KB> I changed it to
> KB> mpxio-disable="yes";
> KB> and rebooted the machine, and it detected 24 drives.
>
> Originally you wanted to get it multipathed, which was the case by
> default. Now you have disabled it (well, you still have two paths, but
> no automatic failover).

Thanks. Can somebody point me to some documentation on this? I wanted to see 24 drives so that I can use load sharing between the two controllers (C1Disk1, C2Disk2, C1Disk3, C2Disk4, ...) for performance.

If I enable multipathing, will the driver do automatic load balancing (sharing) between the two controllers?

Thanks,
Krutibas
On Wed, May 21, 2008 at 10:55 PM, Krutibas Biswal <Krutibas.Biswal at sun.com> wrote:
> Robert Milkowski wrote:
>> Originally you wanted to get it multipathed, which was the case by
>> default. Now you have disabled it (well, you still have two paths, but
>> no automatic failover).
>
> Thanks. Can somebody point me to some documentation on this?
> I wanted to see 24 drives so that I can use load sharing between
> the two controllers (C1Disk1, C2Disk2, C1Disk3, C2Disk4, ...) for
> performance.

You don't need to see the 24 drives. The multipathing driver hides them to present a single drive despite both paths. The layer above (the volume manager, or ZFS in this case) uses this as a multipathed drive.

> If I enable multipathing, will the driver do automatic load balancing
> (sharing) between the two controllers?

Yes, that's exactly what multipathing does.

--
Just me,
Wire ...
Blog: <prstat.blogspot.com>
Krutibas wrote:
> On x64 Solaris 10, the default setting of mpxio was:
>
>   mpxio-disable="no";
>
> I changed it to
>
>   mpxio-disable="yes";
>
> and rebooted the machine, and it detected 24 drives.

...you have just *disabled* Solaris scsi_vhci(7D) multipathing. You should go back to 'mpxio-disable="no";' and look at 'prtconf -v /dev/rdsk/XXX' - you should see the multiple paths 'virtualized' below a single /devices/scsi_vhci disk node.

For symmetric devices, the scsi_vhci code takes care of spreading the load over the available paths.

See scsi_vhci(7D) and mpathadm(1M).

http://www.opensolaris.org/os/project/mpxio/

-Chris
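Once it is re-enabled, something like the following (a sketch; the device name is a placeholder) should show both paths hanging off one logical unit, with the default round-robin policy from scsi_vhci.conf doing the load spreading described above:

   # both initiator ports should appear as paths of a single LU
   mpathadm show lu /dev/rdsk/c4t<WWN>d0s2

   # the global policy is set in /kernel/drv/scsi_vhci.conf:
   #   load-balance="round-robin";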
On Wed, 21 May 2008, Krutibas Biswal wrote:
> Thanks. Can somebody point me to some documentation on this?
> I wanted to see 24 drives so that I can use load sharing between
> the two controllers (C1Disk1, C2Disk2, C1Disk3, C2Disk4, ...) for
> performance.
>
> If I enable multipathing, will the driver do automatic load balancing
> (sharing) between the two controllers?

This depends on the capability of the storage hardware and the model for it that the multipathing software uses. For example, the StorageTek 2540 (which I use) does not have load-share support in hardware, so the multipathing operates as active/standby. In order to help offset this, the 12 drives in the 2540 are split across its two controllers so that 6 drives are active on one controller and the other 6 drives are active on the second controller. With some care (e.g. mirror drive pairing), the traffic for one set of drives is sent down one path, while the traffic for the second set of drives is sent down the other path. If a path or controller fails, then the other controller and FC path take charge and the multipathing software should direct all traffic down that one path.

For a JBOD, this approach provides the performance benefits of load-share until there is a path failure.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
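In ZFS terms, that mirror drive pairing looks roughly like the sketch below (the MPxIO device names are invented): each mirror vdev pairs a drive whose active path goes through controller A with one whose active path goes through controller B, so the two halves of every mirror ride different paths until a failover.

   # which controller currently owns a drive can be read off the path
   # states in:  mpathadm show lu /dev/rdsk/<device>s2
   # pair an A-owned drive with a B-owned drive in every mirror (names invented):
   zpool create datapool \
       mirror c4t600A0B80002A1111d0 c4t600A0B80002A2222d0 \
       mirror c4t600A0B80002A3333d0 c4t600A0B80002A4444d0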
The Solaris SAN Configuration and Multipathing Guide proved very helpful for me:

http://docs.sun.com/app/docs/doc/820-1931/

I, too, was surprised to see MPxIO enabled by default on x86 (we're using Dell/EMC CX3-40 with our X4500 & X6250 systems).

Charles