Hi,

I understand that ZFS leaves multipathing to MPxIO or the like. For a combination of dual-path A5200 with QLGC2100 HBAs (non-Leadville stack), how would ZFS react to seeing this?

WL
Try it and let us know. :)

Seriously - Ya got me. It, along with the rest of the stack, would see multiple drives. However, I'm not sure how the ZFS pool id, detection, and the like would come into play.

Hong Wei Liam wrote:
> Hi,
>
> I understand that ZFS leaves multipathing to MPXIO or the like. For a
> combination of dual-path A5200 with QLGC2100 HBAs (non-leadville
> stack), how would ZFS react in seeing this ?
I don't have any problems with a dual-attached 3511 using qle2462 cards and mpxio. Not sure if that was the question or not. :-)

On October 19, 2006 3:46:55 PM -0400 Torrey McMahon <Torrey.McMahon at Sun.COM> wrote:
> Try it and let us know. :)
>
> Seriously - Ya got me. It, along with the rest of the stack, would see
> multiple drives. However, I'm not sure how the ZFS pool id, detection,
> and the like would come into play.
>
> Hong Wei Liam wrote:
>> Hi,
>>
>> I understand that ZFS leaves multipathing to MPXIO or the like. For a
>> combination of dual-path A5200 with QLGC2100 HBAs (non-leadville
>> stack), how would ZFS react in seeing this ?
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Do you have any multipathing enabled?

Frank Cusack wrote:
> I don't have any problems with a dual attached 3511 using qle2462 cards
> and mpxio. Not sure if that was the question or not. :-)
>
> On October 19, 2006 3:46:55 PM -0400 Torrey McMahon
> <Torrey.McMahon at Sun.COM> wrote:
>> Try it and let us know. :)
>>
>> Seriously - Ya got me. It, along with the rest of the stack, would see
>> multiple drives. However, I'm not sure how the ZFS pool id, detection,
>> and the like would come into play.
>>
>> Hong Wei Liam wrote:
>>> Hi,
>>>
>>> I understand that ZFS leaves multipathing to MPXIO or the like. For a
>>> combination of dual-path A5200 with QLGC2100 HBAs (non-leadville
>>> stack), how would ZFS react in seeing this ?
On October 19, 2006 4:26:59 PM -0400 Torrey McMahon <Torrey.McMahon at Sun.COM> wrote:
> Do you have any multipathing enabled?

rewind ...

> Frank Cusack wrote:
>> I don't have any problems with a dual attached 3511 using qle2462 cards
>> and mpxio. Not sure if that was the question or not. :-)

yes, mpxio. (and it is enabled)

$ zpool status
  pool: zfs1
 state: ONLINE
 scrub: scrub completed with 0 errors on Thu Oct 19 11:11:03 2006
config:

        NAME                     STATE     READ WRITE CKSUM
        zfs1                     ONLINE       0     0     0
          c1t22E4000A33001463d0  ONLINE       0     0     0

errors: No known data errors
$

For some reason, 'stmsboot -L' doesn't report anything, but maybe that's a boot-time only thing? (No boot drives on this array.) Anyway, 'format' shows the mpxio names.

-frank
Hello Frank,

Thursday, October 19, 2006, 10:44:04 PM, you wrote:

FC> yes, mpxio. (and it is enabled)
FC>
FC> $ zpool status
FC>   pool: zfs1
FC>  state: ONLINE
FC>  scrub: scrub completed with 0 errors on Thu Oct 19 11:11:03 2006
FC> config:
FC>
FC>         NAME                     STATE     READ WRITE CKSUM
FC>         zfs1                     ONLINE       0     0     0
FC>           c1t22E4000A33001463d0  ONLINE       0     0     0
FC>
FC> errors: No known data errors
FC> $

This of course does work. I guess the real question is what will happen if you now export your pool, then disable mpxio so that you see the same disk at least twice, and then decide to import that pool. Would it confuse ZFS?

FC> For some reason, 'stmsboot -L' doesn't report anything, but maybe
FC> that's a boot-time only thing? (No boot drives on this array.)

Probably you ran devfsadm -C, so there are no symlinks for the old device names; or maybe you created the LUNs after you enabled MPxIO, so symlinks for the non-mpxio devices were never created in the first place.

-- 
Best regards,
 Robert                            mailto:rmilkowski at task.gda.pl
                                       http://milek.blogspot.com
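[A sketch of Robert's point for the archive: devfsadm -C removes /dev links whose underlying /devices nodes no longer exist, which is why the pre-MPxIO per-path names can vanish. The real command needs Solaris, so this simulates the pruning logic in a scratch directory; all paths and link names here are made up for illustration.]

```shell
# Simulated devfsadm -C: prune dangling symlinks from a fake /dev.
tmp=$(mktemp -d)
mkdir -p "$tmp/devices" "$tmp/dev"
touch "$tmp/devices/mpxio-node"                        # surviving device node
ln -s "$tmp/devices/mpxio-node" "$tmp/dev/c1t22E4000A33001463d0s2"
ln -s "$tmp/devices/gone-node"  "$tmp/dev/c2t0d0s2"    # stale pre-mpxio link
# remove links whose target is gone, as devfsadm -C would
for link in "$tmp"/dev/*; do
  [ -e "$link" ] || rm -f "$link"
done
remaining=$(ls "$tmp/dev")
echo "$remaining"      # only the mpxio name survives
rm -rf "$tmp"
```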
Robert Milkowski wrote:
> This of course does work. I guess the real question was what will
> happen if you now export your pool, then disable mpxio so you will see
> the same disk at least twice and now you decide to import that pool.
> Would it confuse ZFS?

That's what I was getting at. The original question said, "I can't run mpxio".

An even messier scenario: you try to create a pool on the second path while the pool is already loaded. I think the in-use disk checking comes into play as well.

> FC> For some reason, 'stmsboot -L' doesn't report anything, but maybe
> FC> that's a boot-time only thing? (No boot drives on this array.)
>
> Probably you run devfsadm -C so there're no symlinks for old device
> names or maybe you created LUN's after you enabled MPxIO so symlinks
> for non-mpxio devices were actually never created.

Yeah... that should be in the man page or docs.
On October 20, 2006 1:08:30 AM +0200 Robert Milkowski <rmilkowski at task.gda.pl> wrote:
> This of course does work. I guess the real question was what will
> happen if you now export your pool, then disable mpxio so you will see
> the same disk at least twice and now you decide to import that pool.
> Would it confuse ZFS?

Ah, well that I'm not willing to try. On purpose. :-)

> Probably you run devfsadm -C so there're no symlinks for old device
> names or maybe you created LUN's after you enabled MPxIO so symlinks
> for non-mpxio devices were actually never created.

Yep, I ran devfsadm -C.

thanks
-frank
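[For anyone who does want to try the experiment nobody volunteered for, the steps would look roughly like this. The pool name is from the thread; the sequence is a hypothetical sketch, not something anyone in the thread actually ran, and it is guarded so it only narrates on a machine without ZFS/stmsboot.]

```shell
#!/bin/sh
# Hypothetical experiment: export, disable MPxIO, then see what import
# does when each LUN shows up once per path.
if command -v zpool >/dev/null 2>&1 && command -v stmsboot >/dev/null 2>&1; then
  zpool export zfs1
  stmsboot -d          # disable MPxIO; takes effect after a reboot
  # ... reboot ...
  zpool import         # lists importable pools; ZFS matches the pool by
                       # its GUID, so per Richard it should pick one path
  zpool import zfs1
  ran=real
else
  echo "dry run: zpool/stmsboot not available on this host"
  ran=dry
fi
```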
Isn't this in a FAQ somewhere? IIRC, if ZFS finds a disk via two paths, then it will pick one.
 -- richard

Torrey McMahon wrote:
> Robert Milkowski wrote:
>> This of course does work. I guess the real question was what will
>> happen if you now export your pool, then disable mpxio so you will see
>> the same disk at least twice and now you decide to import that pool.
>> Would it confuse ZFS?
>
> That's what I was getting at. The original question said, "I can't run
> mpxio".
>
> An even messier scenario: You try to create a pool on the second path
> when the pool is already loaded. I think the inuse disk checking comes
> into play as well.
The next natural question is:

Richard Elling - PAE wrote:
> Isn't this in a FAQ somewhere? IIRC, if ZFS finds a disk via two paths,
> then it will pick one.

Will it (try to) fail over to the other path if the one it picked fails?

Wbr,
Victor
On Thu, 2006-10-19 at 08:09 +0800, Hong Wei Liam wrote:
> I understand that ZFS leaves multipathing to MPXIO or the like. For a
> combination of dual-path A5200 with QLGC2100 HBAs ...

I'd actually not bother with multipathing A5200s for ZFS.

I have a pair of A5200s which I'm using with ZFS. I set the beasts up in split-loop mode and built the pool from 4-disk raid-z groups, with one disk from each loop in each group...

 - Bill
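[Bill's layout can be sketched concretely. Two A5200s in split-loop mode give four independent loops; taking one disk per loop per raid-z group means a whole-loop failure costs each group only one disk. The controller numbers c1-c4 and target numbers below are hypothetical stand-ins, and this snippet just generates the vdev spec rather than running zpool create.]

```shell
# Build one raidz vdev per "row", one disk from each of the four loops
# (c1-c4 are assumed controller names, one per split loop).
loops="c1 c2 c3 c4"
for row in 0 1; do
  vdev="raidz"
  for l in $loops; do
    vdev="$vdev ${l}t${row}d0"
  done
  echo "$vdev"
done
# prints:
#   raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0
#   raidz c1t1d0 c2t1d0 c3t1d0 c4t1d0
```

The printed lines are what you would hand to `zpool create tank ...` to get two 4-disk raid-z groups striped together.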
Victor Latushkin wrote:
> The next natural question is
>
> Richard Elling - PAE wrote:
>> Isn't this in a FAQ somewhere? IIRC, if ZFS finds a disk via two paths,
>> then it will pick one.
>
> Will it (try to) failover to another one if picked one fails?

No, not automatically. MPxIO provides automatic path failover.
 -- richard