How much fun can you have with a simple thing like powerpath?

Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, I set it up with mpxio and created a simple zpool containing the two LUNs.

It's now been reconfigured to use powerpath instead of mpxio.

My problem is that I can't import the pool. I get:

  pool: ######
    id: ###################
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        disk00                   UNAVAIL   insufficient replicas
          c3t50060xxxxxxxxxxCd1  ONLINE
          c3t50060xxxxxxxxxxCd0  UNAVAIL   cannot open

Now, it's working up to the point at which it's worked out that the bits of the pool are in the right places. It just can't open all the bits. Why is that?

I notice that it's using the underlying cXtXdX device names rather than the virtual emcpower{0,1} names. However, rather more worrying is that if I try to create a new pool, then it correctly fails if I use the cXtXdX device (warning me that it contains part of a pool), but if I go through the emcpower devices then I don't get a warning.

(One other snippet - the cXtXdX device nodes look slightly odd, in that some of them look like the traditional SMI-labelled nodes, while some are more in an EFI style with a device node for the disk.)

Is there any way to fix this, or are we going to have to start over?

If we do start over, is powerpath going to behave itself, or might this sort of issue bite us again in the future?

Thanks for any help or suggestions from any powerpath experts.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Can you post the output of "powermt display dev=all", a zpool status, and a format listing?

zfs-discuss-bounces at opensolaris.org wrote on 07/13/2007 09:38:01 AM:

> How much fun can you have with a simple thing like powerpath?
> [snip]
On 7/13/07, Wade.Stuart at fallon.com <Wade.Stuart at fallon.com> wrote:
> Can you post the output of "powermt display dev=all", a zpool status,
> and a format listing?

Sure. There are no pools to give status on because I can't import them. For the others:

# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00043600837 [########]
Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path -  -- Stats ---
###  HW Path                      I/O Paths                Interf.  Mode    State  Q-IOs  Errors
==============================================================================
3073 pci@1c,600000/lpfc@1/fp@0,0  c2t500601613060099Cd1s0  SP A1    active  alive  0      0
3073 pci@1c,600000/lpfc@1/fp@0,0  c2t500601693060099Cd1s0  SP B1    active  alive  0      0
3072 pci@1d,700000/lpfc@1/fp@0,0  c3t500601603060099Cd1s0  SP A0    active  alive  0      0
3072 pci@1d,700000/lpfc@1/fp@0,0  c3t500601683060099Cd1s0  SP B0    active  alive  0      0

Pseudo name=emcpower1a
CLARiiON ID=APM00043600837 [########]
Logical device ID=600601600C4912004C5CFDFFB62BDA11 [LUN 0]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path -  -- Stats ---
###  HW Path                      I/O Paths                Interf.  Mode    State  Q-IOs  Errors
==============================================================================
3073 pci@1c,600000/lpfc@1/fp@0,0  c2t500601613060099Cd0s0  SP A1    active  alive  0      0
3073 pci@1c,600000/lpfc@1/fp@0,0  c2t500601693060099Cd0s0  SP B1    active  alive  0      0
3072 pci@1d,700000/lpfc@1/fp@0,0  c3t500601603060099Cd0s0  SP A0    active  alive  0      0
3072 pci@1d,700000/lpfc@1/fp@0,0  c3t500601683060099Cd0s0  SP B0    active  alive  0      0

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1f,700000/scsi@2/sd@0,0
       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1f,700000/scsi@2/sd@1,0
       2. c2t500601613060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /pci@1c,600000/lpfc@1/fp@0,0/ssd@w500601613060099c,0
       3. c2t500601693060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /pci@1c,600000/lpfc@1/fp@0,0/ssd@w500601693060099c,0
       4. c2t500601613060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /pci@1c,600000/lpfc@1/fp@0,0/ssd@w500601613060099c,1
       5. c2t500601693060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /pci@1c,600000/lpfc@1/fp@0,0/ssd@w500601693060099c,1
       6. c3t500601683060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /pci@1d,700000/lpfc@1/fp@0,0/ssd@w500601683060099c,0
       7. c3t500601603060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /pci@1d,700000/lpfc@1/fp@0,0/ssd@w500601603060099c,0
       8. c3t500601683060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /pci@1d,700000/lpfc@1/fp@0,0/ssd@w500601683060099c,1
       9. c3t500601603060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /pci@1d,700000/lpfc@1/fp@0,0/ssd@w500601603060099c,1
      10. emcpower0a <DGC-RAID 5-0219-500.00GB>
          /pseudo/emcp@0
      11. emcpower1a <DGC-RAID 5-0219-500.00GB>
          /pseudo/emcp@1

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
You wouldn't happen to be running this on a SPARC, would you? I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump when creating a zpool. I filed a bug report, though it doesn't appear to be in the database (not sure if that means it was rejected or I didn't submit it correctly). Also, I was using the powerpath pseudo device, not the WWN, though.

We had planned on opening a ticket with Sun, but our DBAs sufficiently put the kybosh on using ZFS on their systems when they caught wind of my problem, so basically I can no longer use that server to investigate the issue, and unfortunately I do not have any other available SPARCs with SAN connectivity.

--
Sean

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org On Behalf Of Peter Tribble
Sent: Friday, July 13, 2007 11:18 AM
To: Wade.Stuart at fallon.com
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath

> Sure. There are no pools to give status on because I can't import them.
> For the others:
> [powermt display and format output snipped]
On 7/13/07, Alderman, Sean <salderman at medplus.com> wrote:
> You wouldn't happen to be running this on a SPARC, would you?

That I would.

> I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump
> when creating a zpool. I filed a bug report, though it doesn't appear
> to be in the database (not sure if that means it was rejected or I
> didn't submit it correctly).

I'm not seeing it quite like that. ZFS+mpxio works; ZFS+powerpath seems to have "issues".

> Also, I was using the powerpath pseudo device, not the WWN, though.

I've not got that far. During an import, ZFS just pokes around - there doesn't seem to be an explicit way to tell it which particular devices or SAN paths to use.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
There was a Sun Forums post that I referenced in that other thread that mentioned something about mpxio working but powerpath not working. Of course I don't know how valid those statements are/were, and I don't recall much detail given.

--
Sean

-----Original Message-----
From: Peter Tribble [mailto:peter.tribble at gmail.com]
Sent: Friday, July 13, 2007 11:53 AM
Subject: Re: [zfs-discuss] ZFS and powerpath

> [snip]
Hmm. Odd. I've got PowerPath working fine with ZFS with both Symmetrix and Clariion back ends. The PowerPath version is 4.5.0, running on Leadville qlogic drivers. Sparc hardware (if it matters).

I ran one of our test databases on ZFS on the DMX via PowerPath for a couple of months until we switched off of it because of the 'bogus memory usage' statistics problem. We still use it on a server we use for logs processing and retention that uses the Clariion as a back end.

cheers,
Brian

On Jul 13, 2007, at 11:08 AM, Alderman, Sean wrote:
> There was a Sun Forums post that I referenced in that other thread
> that mentioned something about mpxio working but powerpath not
> working. Of course I don't know how valid those statements are/were,
> and I don't recall much detail given.
> [snip]
I wonder what kind of card Peter's using and if there is a potential linkage there. We've got the Sun-branded Emulex cards in our SPARCs. I also wonder, if Peter were able to allocate an additional LUN to his system, whether or not he'd be able to create a pool on that new LUN.

I'm not sure why exactly they were chosen over the Qlogic; some of our admins swear by the Qlogic cards, others have had bad experiences with the Qlogic cards not allowing for persistent binding on some configurations, but from my perspective, being mostly a SAN noob, it's all hearsay.

--
Sean M. Alderman
513.204.2704

-----Original Message-----
From: Brian Wilson [mailto:bfwilson at doit.wisc.edu]
Sent: Friday, July 13, 2007 1:58 PM
Subject: Re: [zfs-discuss] ZFS and powerpath

> Hmm. Odd. I've got PowerPath working fine with ZFS with both Symmetrix
> and Clariion back ends.
> [snip]
On Jul 13, 2007, at 1:15 PM, Alderman, Sean wrote:
> I wonder what kind of card Peter's using and if there is a potential
> linkage there. We've got the Sun-branded Emulex cards in our SPARCs. I
> also wonder, if Peter were able to allocate an additional LUN to his
> system, whether or not he'd be able to create a pool on that new LUN.
> [snip]

Hmmmmm. How many devices/LUNs can the server see? I don't know how import finds the pools on the disk, but it sounds like it's not happy somehow. Is there any possibility it's seeing a Clariion mirror copy of the disks in the pool as well?

Just a couple thoughts.

Brian

-----------------------------------------------------------------------------------
Brian Wilson, Sun SE, UW-Madison DoIT
Room 3162 CS&S       608-263-8047
bfwilson(a)doit.wisc.edu
'I try to save a life a day. Usually it's my own.' - John Crichton
-----------------------------------------------------------------------------------
On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote:
> I ran one of our test databases on ZFS on the DMX via PowerPath for a
> couple of months until we switched off of it because of the 'bogus
> memory usage' statistics problem.
> [snip]

hey Brian,

Out of curiosity, what does 'bogus memory usage' refer to?

eric
On 7/13/07, Brian Wilson <bfwilson at doit.wisc.edu> wrote:
> Hmmmmm. How many devices/LUNs can the server see? I don't know how
> import finds the pools on the disk, but it sounds like it's not happy
> somehow. Is there any possibility it's seeing a Clariion mirror copy
> of the disks in the pool as well?

I don't think it's that. As far as I can tell it can see exactly the LUNs (2 of them - via 2 controllers and 2 HBAs, hence 8 native devices) it's supposed to. And they appear to have exactly the right data on them. And it can even detect part of the pool on each LUN - it just fails to open one of them.

Thanks,

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On 7/13/07, Alderman, Sean <salderman at medplus.com> wrote:
> I wonder what kind of card Peter's using and if there is a potential
> linkage there. We've got the Sun-branded Emulex cards in our SPARCs. I
> also wonder, if Peter were able to allocate an additional LUN to his
> system, whether or not he'd be able to create a pool on that new LUN.

On a different continent, and I didn't buy it. Shows up as lpfc (is that Emulex?). I'm not sure that's related - I can see the LUNs and devices, it's just that zfs isn't happy. (I still have this feeling that powerpath is doing something slightly different that zfs doesn't expect.)

Thanks,

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Peter Tribble wrote:

> I've not got that far. During an import, ZFS just pokes around - there
> doesn't seem to be an explicit way to tell it which particular devices
> or SAN paths to use.

You can't tell it which devices to use in a straightforward manner, but you can tell it which directories to scan:

  zpool import [-d dir]

By default, it scans /dev/dsk.

Does a truss of the zpool import show the powerpath devices being opened and read from?

Regards,
Manoj
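A minimal sketch of that approach, assuming you want the import to see only the PowerPath pseudo devices: the directory name below is made up, and the assumption that the pool label is visible through the "a" slice of the pseudo devices is based only on the emcpower0a/emcpower1a names shown earlier.

  # Build a directory holding links to just the emcpower pseudo devices,
  # then point the import at that directory instead of /dev/dsk.
  mkdir /var/tmp/emcdev
  ln -s /dev/dsk/emcpower0a /var/tmp/emcdev/
  ln -s /dev/dsk/emcpower1a /var/tmp/emcdev/

  zpool import -d /var/tmp/emcdev           # scan only those devices
  zpool import -d /var/tmp/emcdev disk00    # then import the pool by name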
zfs-discuss-bounces at opensolaris.org wrote on 07/13/2007 02:21:52 PM:

> You can't tell it which devices to use in a straightforward manner, but
> you can tell it which directories to scan:
>
>   zpool import [-d dir]
>
> By default, it scans /dev/dsk.

AFAIK powerpath does not really need to use the powerpath pseudo devices - they are just there for convenience. I would expect the drives to be readable from either the c1xxxxxxxx devices or emc*.
Doesn't that then create a dependence on the cXtXdXsX device name being available?

  /dev/dsk/c2t500601601020813Ed0s0 = path 1
  /dev/dsk/c2t500601681020813Ed0s0 = path 2
  /dev/dsk/emcpower0a              = pseudo device pointing to both paths

So if you've got a zpool on /dev/dsk/c2t500601601020813Ed0s0 and that path becomes unavailable (perhaps due to device renumbering or failure), won't the zpool be unavailable? Whereas a zpool on /dev/dsk/emcpower0a will automagically handle the situation (assuming the 2nd path is available)?

My thought would be that perhaps zpool import is confused by seeing the same zpool information on all three devices, but I concede this is a relatively uneducated guess.

--
Sean

-----Original Message-----
From: Wade.Stuart at fallon.com
Sent: Friday, July 13, 2007 4:11 PM
Subject: Re: [zfs-discuss] ZFS and powerpath

> [snip]
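One way to test that guess, as a sketch: zdb ships with ZFS on Solaris 10, the device names below are the illustrative ones from the message above, and the egrep pattern just trims the label dump down to the interesting fields.

  # Dump the ZFS label from each underlying path and from the pseudo device.
  # If all three report the same pool name and pool_guid, they are one vdev
  # seen through three device nodes, not three conflicting pool members.
  zdb -l /dev/dsk/c2t500601601020813Ed0s0 | egrep 'name|guid'
  zdb -l /dev/dsk/c2t500601681020813Ed0s0 | egrep 'name|guid'
  zdb -l /dev/dsk/emcpower0a              | egrep 'name|guid'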
Peter Tribble wrote:
> On a different continent, and I didn't buy it. Shows up as lpfc (is
> that Emulex?). I'm not sure that's related - I can see the LUNs
> and devices, it's just that zfs isn't happy.

Those, lpfc, are native Emulex drivers.
Wade.Stuart at fallon.com wrote:
> AFAIK powerpath does not really need to use the powerpath pseudo
> devices - they are just there for convenience. I would expect the
> drives to be readable from either the c1xxxxxxxx devices or emc*.

ZFS needs to use the top-level multipath device or bad things will probably happen on a failover or at initial zpool creation. For example: you'll try to use the device on two paths and cause a LUN failover to occur.

Mpxio fixes a lot of these issues. I strongly suggest using mpxio instead of powerpath, but sometimes it's all you can use if the array is new and mpxio doesn't have the hooks for it ... yet.
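For the pool-creation case, that advice amounts to handing zpool only the pseudo devices. A sketch, reusing the emcpower names from Peter's format output; the slice letter and flat pool layout are assumptions, and this is for building a fresh pool, not for rescuing the one that is stuck.

  # Create the pool on the PowerPath pseudo devices so every ZFS I/O goes
  # through the multipathed node rather than down one raw c#t#d# path.
  # WARNING: this writes new labels - only for LUNs with no data you care about.
  zpool create disk00 emcpower0a emcpower1a

  # The vdev names recorded in the pool should now be the emcpower devices.
  zpool status disk00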
> Doesn't that then create a dependence on the cXtXdXsX device name being
> available?
>
> So if you've got a zpool on /dev/dsk/c2t500601601020813Ed0s0 and that
> path becomes unavailable (perhaps due to device renumbering or failure),
> won't the zpool be unavailable? Whereas a zpool on /dev/dsk/emcpower0a
> will automagically handle the situation (assuming the 2nd path is
> available)?

How would the path become unavailable? While it looks like a raw path, it's still being managed by powerpath. So during a boot, it should not suddenly disappear, even if the path goes away.

If it were to go away between boots, then I see it as the same situation as a disk being renumbered/repathed on "normal" single-pathed storage. ZFS needs to be able to scan all paths until it finds the disk again. The pool should not be tied to the path.

-- 
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
Peter Tribble wrote:
> # powermt display dev=all
> Pseudo name=emcpower0a
> CLARiiON ID=APM00043600837 [########]
> Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> [snip]

If it helps at all: we're having a similar problem. Any LUNs configured with their default owner set to SP B don't get along with ZFS. We're running on a T2000, with Emulex cards and the ssd driver. MPxIO seems to work well for most cases, but the SAN guys are not comfortable with it.

-Andy
> Shows up as lpfc (is that Emulex?)

lpfc (or fibre-channel) is an Emulex-branded Emulex card device - Sun-branded Emulex uses the emlxs driver.

I run zfs (v2 and v3) on Emulex and Sun-branded Emulex on SPARC with PowerPath 4.5.0 (and MPxIO in other cases) and Clariion arrays and have never seen this problem. In fact, I'm trying to get rid of my PowerPath instances and standardize on MPxIO, and when I've destroyed PP devices that lead to a zpool and rediscovered the devices, the pools show up healthy with the new MPxIO names. This is all using Update 3 with 118833-36.

As an HBA note, I have a pair of Emulex LP9802s (lpfc devices) with proper firmware for the CX-600 on a V890 using zpools, and a ridiculous number of device errors (esp. Page 83 errors). Other systems using Sun-branded Emulex cards (SG-XPCI1FC-EM2) don't show these errors, and I'm swapping the cards later this month to get rid of them.
On 7/15/07, JS <jeff.sutch at acm.org> wrote:
> I run zfs (v2 and v3) on Emulex and Sun-branded Emulex on SPARC with
> PowerPath 4.5.0 (and MPxIO in other cases) and Clariion arrays and have
> never seen this problem. In fact, I'm trying to get rid of my PowerPath
> instances and standardize on MPxIO, and when I've destroyed PP devices
> that lead to a zpool and rediscovered the devices, the pools show up
> healthy with the new MPxIO names. This is all using Update 3 with
> 118833-36.

That's what I would have expected to happen. We're going the other way, but all I thought was going to happen was that the paths would change but everything else would be fine. Unfortunately not :-(

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On 7/13/07, Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> ZFS needs to use the top-level multipath device or bad things will
> probably happen on a failover or at initial zpool creation. For
> example: you'll try to use the device on two paths and cause a LUN
> failover to occur.
>
> Mpxio fixes a lot of these issues. I strongly suggest using mpxio
> instead of powerpath, but sometimes it's all you can use if the array
> is new and mpxio doesn't have the hooks for it ... yet.

Hm. This is pretty old stuff, and what is irritating is that I had it all working under mpxio. Then I was told the system had to be reconfigured to use powerpath, and I've not seen my data since.

(I follow the logic that it's the datacenter standard, although I'm no longer sure I agree with it based on my experience so far; nor does my own experience match the alleged technical superiority of powerpath over mpxio. Ho hum.)

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Carisdad wrote:
> If it helps at all: we're having a similar problem. Any LUNs configured
> with their default owner set to SP B don't get along with ZFS. We're
> running on a T2000, with Emulex cards and the ssd driver. MPxIO seems
> to work well for most cases, but the SAN guys are not comfortable with
> it.

Are you using the top-level powerpath device? Is the Clariion in an auto-trespass mode where any I/O going down the alt path will cause the LUNs to move?
> > If it helps at all: we're having a similar problem. Any LUNs
> > configured with their default owner set to SP B don't get along with
> > ZFS. We're running on a T2000, with Emulex cards and the ssd driver.
> > MPxIO seems to work well for most cases, but the SAN guys are not
> > comfortable with it.
>
> Are you using the top-level powerpath device? Is the Clariion in an
> auto-trespass mode where any I/O going down the alt path will cause
> the LUNs to move?

My previous experience with powerpath was that it rode below the Solaris device layer. So you couldn't cause trespass by using the "wrong" device. It would just go to powerpath, which would choose the link to use on its own.

Is this not true, or has it changed over time?

-- 
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
Darren Dunham wrote:
> My previous experience with powerpath was that it rode below the
> Solaris device layer. So you couldn't cause trespass by using the
> "wrong" device. It would just go to powerpath, which would choose the
> link to use on its own.
>
> Is this not true, or has it changed over time?

I haven't looked at PowerPath for some time, but it used to be the opposite: the powerpath node sat on top of the actual device paths. One of the selling points of mpxio is that it doesn't have that problem (at least for devices it supports). Most of the multipath software had that same limitation.

However, I'm not an expert on powerpath by any stretch of the imagination. I just took a quick look at the powerpath manual (the 4.0 version) and it says you can now use both types, which seems a little confusing. Again, I'd be interested to see if using the pseudo-device works better ... not to mention how it works using the direct-path disk entry.
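A quick way to see where the pseudo node sits relative to the native paths on a given box, as a sketch only: the device names are the ones from Peter's format listing, and the symlink targets shown in the comments are typical Solaris layout rather than output captured from his system.

  # The pseudo device resolves to the emcp pseudo driver node ...
  ls -l /dev/dsk/emcpower0a
  #   ... -> ../../devices/pseudo/emcp@0:a

  # ... while a native path resolves to an ssd node under the HBA.
  ls -l /dev/dsk/c3t500601603060099Cd1s0
  #   ... -> ../../devices/pci@1d,700000/lpfc@1/fp@0,0/ssd@w500601603060099c,1:a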
On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
> I haven't looked at PowerPath for some time, but it used to be the
> opposite: the powerpath node sat on top of the actual device paths.
> One of the selling points of mpxio is that it doesn't have that
> problem (at least for devices it supports). Most of the multipath
> software had that same limitation.

I agree, it's not true. I don't know how long it hasn't been true, but for the last year and a half I've been implementing PowerPath on Solaris 8, 9, and 10, and the way to make it work is to point whatever disk tool you're using at the emcpower device. The other paths are there because Leadville finds them and creates them (if you're using Leadville), but PowerPath isn't doing anything to make them redundant; it's giving you the emcpower device and the emcp, etc. drivers to front-end them and give you a multipathed device (the emcpower device). It DOES choose which one to use, for all I/O going through the emcpower device. In a situation where you lose paths and I/O is moving, you'll see scsi errors down one path, then the next, then the next, as PowerPath gets fed the scsi error and tries the next device path. If you use those actual device paths, you're not actually getting a device that PowerPath is multipathing for you (i.e. it does not dig in beneath the scsi driver).

I haven't had any problem making Veritas, Disksuite, or in a very few cases so far ZFS work by pointing them at the emcpower devices. (Note that 'not having any problem' included reading the PowerPath manuals and docs before implementing it, to make sure it's being done 'right' according to EMC's procedures, not just Sun's or Veritas'.) I haven't dissected ...

cheers,
Brian
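As a concrete version of "point the tool at the emcpower device", a sketch using the numbered form of powermt that appears later in the thread ("powermt display dev=58"); the device number 0 here is an assumption that emcpower0a is power device 0 on Peter's box.

  # Confirm which native c#t#d# paths and which storage processor sit behind
  # the pseudo device before handing it to ZFS (or SVM/VxVM).
  powermt display dev=0

  # Flag any dead paths up front so a later "cannot open" isn't a surprise.
  powermt check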
On Jul 15, 2007, at 12:59 PM, Peter Tribble wrote:
> Hm. This is pretty old stuff, and what is irritating is that I had it
> all working under mpxio. Then I was told the system had to be
> reconfigured to use powerpath, and I've not seen my data since.
>
> (I follow the logic that it's the datacenter standard, although I'm no
> longer sure I agree with it based on my experience so far; nor does
> my own experience match the alleged technical superiority of
> powerpath over mpxio. Ho hum.)

I would definitely offer to work with your SAN team on getting MPxIO "certified" with your arrays. Where I work, we use Veritas DMP with our Hitachi arrays. When we go to ZFS, we will, of course, go to MPxIO instead of Hitachi's HDLM. The only difference from this thread is that we use Sun-branded Qlogic HBAs instead of Emulex on both our SPARC and x64 Sun servers.

It's been slow going for me, but I've had great success working with my SAN team (who in turn work with HDS) on getting new technologies* working in our environment.

-john

* Our SAN team runs things very conservatively. They consider "new" technologies to be things introduced one to two years ago.
There is an open issue/bug with ZFS and EMC PowerPath for Solaris 10 in the x86/x64 space. My customer encountered the issue back in April 2007 and is awaiting the patch. We're expecting an update (hopefully a patch) by the end of July 2007. As I recall, it did involve CX arrays and "trespass" functionality.
Brian Wilson wrote:
> I agree, it's not true. [...] the way to make it work is to point
> whatever disk tool you're using at the emcpower device. [...] If you
> use those actual device paths, you're not actually getting a device
> that PowerPath is multipathing for you (i.e. it does not dig in
> beneath the scsi driver).

I'm afraid I have to disagree with you: I'm using the /dev/dsk/c2t$WWNdXs2 devices quite happily, with powerpath handling failover for my Clariion.

# powermt version
EMC powermt for PowerPath (c) Version 4.4.0 (build 274)

# powermt display dev=58
Pseudo name=emcpower58a
CLARiiON ID=APM00051704678 [uscicsap1]
Logical device ID=6006016067E51400565259A15331DB11 [saperqdb1: /oracle/Q02/saparch]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path -  -- Stats ---
###  HW Path            I/O Paths                 Interf.  Mode    State  Q-IOs  Errors
==============================================================================
3073 pci@1c/SUNW,qlc@1  c2t5006016130202E48d58s0  SP A1    active  alive  0      0
3073 pci@1c/SUNW,qlc@1  c2t5006016930202E48d58s0  SP B1    active  alive  0      0

# fsck /dev/dsk/c2t5006016130202E48d58s0
** /dev/dsk/c2t5006016130202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)

# fsck /dev/dsk/c2t5006016930202E48d58s0
** /dev/dsk/c2t5006016930202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)

So at this point I can look down either path and get to my data. Now I kill one of the two paths via SAN zoning, run "cfgadm -c configure c2", and powermt check reports that the path to SP A is now dead. I'm still able to fsck the dead path:

# cfgadm -c configure c2
# powermt check
Warning: CLARiiON device path c2t5006016130202E48d58s0 is currently dead.
Do you want to remove it (y/n/a/q)? n

# powermt display dev=58
Pseudo name=emcpower58a
CLARiiON ID=APM00051704678 [uscicsap1]
Logical device ID=6006016067E51400565259A15331DB11 [saperqdb1: /oracle/Q02/saparch]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path -  -- Stats ---
###  HW Path            I/O Paths                 Interf.  Mode    State  Q-IOs  Errors
==============================================================================
3073 pci@1c/SUNW,qlc@1  c2t5006016130202E48d58s0  SP A1    active  dead   0      1
3073 pci@1c/SUNW,qlc@1  c2t5006016930202E48d58s0  SP B1    active  alive  0      0

# fsck /dev/dsk/c2t5006016130202E48d58s0
** /dev/dsk/c2t5006016130202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)

# fsck /dev/dsk/c2t5006016930202E48d58s0
** /dev/dsk/c2t5006016930202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)