I had a system that I was testing ZFS on, using EMC LUNs to create a striped zpool without the multi-pathing software PowerPath. Of course a storage emergency came up, so I lent this storage out for temporary use and we're still using it. I'd like to add PowerPath to take advantage of the multi-pathing in case I lose an SFP (or an entire switch, for that matter), but I'm not exactly sure what I can do.

So my zpool currently looks like:

######################################################
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        myzfs         ONLINE       0     0     0
          1234567890  ONLINE       0     0     0
          1234567891  ONLINE       0     0     0
#######################################################

So how would I change the path after I install PowerPath to use the multi-path? 1234567890 would become /dev/dsk/emcpower1 and 1234567891 would become /dev/dsk/emcpower2.

In the end it would look like:

######################################################
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        myzfs        ONLINE       0     0     0
          emcpower1  ONLINE       0     0     0
          emcpower2  ONLINE       0     0     0
#######################################################

I would imagine (because I haven't tried it yet) that it would require a zpool export/import to make this happen. Has anyone tried this? Am I fubar? Thanks for the help! Great forum, btw...
--
This message posted from opensolaris.org
Hi Mike,

In theory, this should work, but I don't have any experience with this particular software; maybe someone else does.

One way to determine whether it might work is to use the zdb -l command on each device in the pool and check for a populated devid= string. If the devid exists, then ZFS should be able to handle the device change. On a ctd device the syntax looks like this, for example:

# zdb -l /dev/dsk/c1t1d0s0

I would also recommend exporting the pool first, but again I have no experience with this software.

The whole topic of moving or changing devices under live pools, particularly on non-Sun gear, makes me queasy, as does the fact that my teenage son will be driving soon, so maybe my queasiness is related to my general fear of the unknown.

Cindy

On 12/08/09 10:15, Mike wrote:
> I had a system that I was testing ZFS on, using EMC LUNs to create a striped zpool without the multi-pathing software PowerPath. [...] So how would I change the path after I install PowerPath to use the multi-path? [...] I would imagine (because I haven't tried it yet) that it would require a zpool export/import to make this happen. Has anyone tried this?
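As a rough illustration of the check Cindy describes, the same command can be run against each device currently backing the pool (the ctd paths below are examples only; substitute the LUNs that zpool status lists):

# zdb -l /dev/dsk/c1t1d0s0 | grep devid
# zdb -l /dev/dsk/c1t2d0s0 | grep devid

A non-empty devid= line in each vdev label suggests ZFS identifies the LUN by devid as well as by path, so it should still recognize the device once the path changes to an emcpower node; an empty result would be a reason for extra caution.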
But don't forget that "The unknown is what makes life interesting" :)

Bruno

Cindy Swearingen wrote:
> Hi Mike,
>
> In theory, this should work, but I don't have any experience with this
> particular software; maybe someone else does.
>
> One way to determine whether it might work is to use the zdb -l command
> on each device in the pool and check for a populated devid= string. If
> the devid exists, then ZFS should be able to handle the device change. [...]
Thanks, Cindy, for your input... I love your fear example too, but lucky for me I have 10 years before I have to worry about that, and hopefully we'll all be in hovering bumper cars by then.

It looks like I'm going to have to create another test system and try the recommendations given here... and hope that another emergency doesn't arise... :)
--
This message posted from opensolaris.org
On Tue, Dec 8, 2009 at 1:37 PM, Mike <mijohnst at gmail.com> wrote:
> Thanks, Cindy, for your input... I love your fear example too, but lucky
> for me I have 10 years before I have to worry about that, and hopefully we'll
> all be in hovering bumper cars by then.
>
> It looks like I'm going to have to create another test system and try the
> recommendations given here... and hope that another emergency doesn't
> arise... :)

I'm not going to tell you not to test it, but you should be just fine. ZFS writes a signature to the beginning of the disk; it doesn't care where the physical location of the disk is. It's no different than setting up ASM and moving from direct device paths to a multipathing situation. I've never seen PowerPath touch the actual contents of the LUN; it merely manages paths.

--
--Tim
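To see Tim's point in practice: with the pool exported, running zpool import with no arguments makes ZFS scan the devices under /dev/dsk and report any pools it finds from their on-disk labels, whatever path the LUNs now appear under (pool name taken from the original post; output omitted):

# zpool import          (scans /dev/dsk and lists pools found from their labels)
# zpool import myzfs    (imports the pool by name once it shows up)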
Hi Mike,

While I have not done it with EMC and PowerPath, I have done a similar thing with mpxio, and have even gone as far as grabbing the disks and throwing them into a totally different machine.

The steps are:
1) Export the pool.
2) Do your PowerPath stuff and get the new devices seen by Solaris.
3) Import the pool. ZFS searches all attached devices at import to locate all the bits of the pool. (See the sketch below.)

Regards
Rodney

Bruno Sousa wrote:
> But don't forget that "The unknown is what makes life interesting" :)
>
> Bruno
>
> Cindy Swearingen wrote:
>> Hi Mike,
>>
>> In theory, this should work, but I don't have any experience with this
>> particular software; maybe someone else does. [...]
--
============================================
Rodney Lindner
Services Chief Technologist
Sun Microsystems Australia
Phone: +61 (0)2 94669674 (EXTN: 59674)
Mobile: +61 (0)404 815 842
Email: rodney.lindner at sun.com
============================================
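A minimal command sketch of the three steps Rodney lists, using the pool name from the original post (the PowerPath installation itself is vendor-specific and not shown here):

# zpool export myzfs
  (install and configure PowerPath so the emcpower pseudo-devices are visible to Solaris)
# zpool import myzfs
# zpool status myzfs    (confirm the vdevs now show up as emcpower devices)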
On Tue, 2009-12-08 at 09:15 -0800, Mike wrote:
> I had a system that I was testing ZFS on, using EMC LUNs to create a striped zpool without the multi-pathing software PowerPath. [...]
>
> I would imagine (because I haven't tried it yet) that it would require a zpool export/import to make this happen. Has anyone tried this? Am I fubar? Thanks for the help! Great forum, btw...

When I've done this in the past it's been a pool export, reconfigure storage, pool import procedure.

In my dark PowerPath days I recall PowerPath couldn't handle using the emcpower# disk name. You had to format the emcpower device and create your zpool on the "emcpower1a" slice. This may have changed in the newer versions of PowerPath (the last version I ran was 5.0.0_b141).

I've since switched to using Sun MPxIO for multipathing. It's worked fine so far, and it's certain to support your ZFS config. Just export your pool, run the "stmsboot -e" command, reboot, and re-import your pool.

-Alex
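A rough sketch of the MPxIO route Alex describes, again assuming the pool name from the original post; exact prompts and output are omitted:

# zpool export myzfs
# stmsboot -e           (enables MPxIO on the FC HBAs; agree to the reboot when it asks)
  ... after the reboot ...
# zpool import myzfs
# zpool status myzfs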
Thanks for the info, Alexander... I will test this out. I'm just wondering what it's going to see after I install PowerPath. Since each drive will have 4 paths, plus the PowerPath pseudo-device... after doing a "zpool import", how will I force it to use a specific path? Thanks again! Good to know that this can be done.

On Wed, Dec 9, 2009 at 5:16 AM, Alexander J. Maidak <ajmaidak at mchsi.com> wrote:
> When I've done this in the past it's been a pool export, reconfigure
> storage, pool import procedure.
>
> In my dark PowerPath days I recall PowerPath couldn't handle using
> the emcpower# disk name. You had to format the emcpower device and
> create your zpool on the "emcpower1a" slice. This may have changed in
> the newer versions of PowerPath (the last version I ran was 5.0.0_b141).
>
> I've since switched to using Sun MPxIO for multipathing. It's worked
> fine so far, and it's certain to support your ZFS config. Just export
> your pool, run the "stmsboot -e" command, reboot, and re-import
> your pool.
>
> -Alex
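One hedged answer to the "specific path" question: zpool import normally scans everything under /dev/dsk, but it can be pointed at a directory containing only the device nodes you want considered, for example by symlinking just the emcpower slices into a scratch directory (all names below are illustrative). Whether ZFS prefers the pseudo-device or one of the native paths on a plain import may depend on the PowerPath version, so treat this as one possible way to steer it rather than a required step:

# mkdir /var/tmp/emcdevs
# ln -s /dev/dsk/emcpower1a /var/tmp/emcdevs/
# ln -s /dev/dsk/emcpower2a /var/tmp/emcdevs/
# zpool import -d /var/tmp/emcdevs myzfs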
Alex, thanks for the info. You made my heart stop a little when reading about your problem with PowerPath, but MPxIO seems like it might be a good option for me. I'll try that as well, although I have not used it before. Thank you!
--
This message posted from opensolaris.org
On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston <mijohnst at gmail.com> wrote:
> Thanks for the info, Alexander... I will test this out. I'm just wondering
> what it's going to see after I install PowerPath. Since each drive will
> have 4 paths, plus the PowerPath pseudo-device... after doing a "zpool import",
> how will I force it to use a specific path? Thanks again! Good to know that
> this can be done.

I had a similar problem in the last weeks. I have on my testbed server (Solaris 10 Update 4) PowerPath 5.2, connected to two FC switches and then to a Clariion CX3.

Each LUN on the Clariion creates 4 paths to the host. I created 8 LUNs, reconfigured Solaris to make them visible to the host, and then tried to create a ZFS pool. I encountered a problem when I ran the command:

--
root at solaris10# zpool status
  pool: tank
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Dec 14 05:00:01 2009
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0

errors: No known data errors
root at solaris10# zpool history
History for 'tank':
2009-12-10.20:19:17 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-11.05:00:01 zpool scrub tank
2009-12-11.14:28:33 zpool add tank mirror emcpower8a emcpower6a
2009-12-14.05:00:01 zpool scrub tank

root at solaris10# zpool add tank mirror emcpower3a emcpower1a
internal error: Invalid argument
Abort (core dumped)
root at solaris#
--

The next task will be to upgrade PowerPath (from 5.2 to 5.2 SP2) and then retry the command to see if the problem (internal error) disappears. Did anybody have a similar problem?

Cesare

--
Marie von Ebner-Eschenbach - "Even a stopped clock is right twice a day." - http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac.html
Hi Cesare,

According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but then the problem reoccurred.

According to the EMC PowerPath release notes, here:

www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf

this problem is fixed in 5.2 SP1.

I would review the related ZFS information in this doc before proceeding.

Thanks,

Cindy

On 12/14/09 03:53, Cesare wrote:
> I had a similar problem in the last weeks. I have on my testbed server
> (Solaris 10 Update 4) PowerPath 5.2, connected to two FC switches and
> then to a Clariion CX3. [...]
>
> root at solaris10# zpool add tank mirror emcpower3a emcpower1a
> internal error: Invalid argument
> Abort (core dumped)
>
> The next task will be to upgrade PowerPath (from 5.2 to 5.2 SP2) and then
> retry the command to see if the problem (internal error) disappears.
> Did anybody have a similar problem?
Hi Cindy,

I downloaded that document and I'll follow the instructions before updating the host. I just tried the procedure on a different host (which did not have the problem I wrote about) and it worked. I'll follow up with news after upgrading the host where the problem occurs.

Cesare

On Mon, Dec 14, 2009 at 9:12 PM, Cindy Swearingen <Cindy.Swearingen at sun.com> wrote:
> According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but
> then the problem reoccurred.
>
> According to the EMC PowerPath release notes, here:
>
> www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf
>
> this problem is fixed in 5.2 SP1.
>
> I would review the related ZFS information in this doc before proceeding. [...]

--
Mike Ditka - "If God had wanted man to play soccer, he wouldn't have given us arms." - http://www.brainyquote.com/quotes/authors/m/mike_ditka.html
Hi all,

After upgrading PowerPath (from 5.2 to 5.2 SP2) and retrying the commands to create the zpool, everything executed successfully:

--
root at solaris10# zpool history
History for 'tank':
2009-12-15.14:37:00 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-15.14:37:20 zpool add tank mirror emcpower8a emcpower6a
2009-12-15.14:37:56 zpool add tank mirror emcpower1a emcpower3a
2009-12-15.14:38:09 zpool add tank mirror emcpower2a emcpower4a
root at solaris10# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower1a  ONLINE       0     0     0
            emcpower3a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower2a  ONLINE       0     0     0
            emcpower4a  ONLINE       0     0     0

errors: No known data errors
--

Before, the PowerPath version was 5.2.0.GA.b146; now it is 5.2.SP2.b012:

--
root at solaris10# pkginfo -l EMCpower
   PKGINST:  EMCpower
      NAME:  EMC PowerPath (Patched with 5.2.SP2.b012)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  5.2.0_b146
   BASEDIR:  /opt
    VENDOR:  EMC Corporation
    PSTAMP:  beavis951018123443
  INSTDATE:  Dec 15 2009 12:53
    STATUS:  completely installed
     FILES:      339 installed pathnames
                  42 directories
                 123 executables
              199365 blocks used (approx)
--

So SP2 incorporates the fix for using ZFS on the PowerPath pseudo emcpower devices.

Cesare

On Mon, Dec 14, 2009 at 9:12 PM, Cindy Swearingen <Cindy.Swearingen at sun.com> wrote:
> According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but
> then the problem reoccurred.
>
> According to the EMC PowerPath release notes, here:
>
> www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf
>
> this problem is fixed in 5.2 SP1. [...]
--
Pablo Picasso - "Computers are useless. They can only give you answers." - http://www.brainyquote.com/quotes/authors/p/pablo_picasso.html
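For anyone wanting to double-check the multipathing underneath the new pool, a hedged sketch (PowerPath's own path display plus a ZFS-level check; output omitted):

# powermt display dev=all   (shows the native paths behind each emcpower pseudo-device and their state)
# zpool status tank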
Great news. Thanks for letting us know.

Cindy

On 12/15/09 06:48, Cesare wrote:
> After upgrading PowerPath (from 5.2 to 5.2 SP2) and retrying the commands
> to create the zpool, everything executed successfully. [...]
>
> So SP2 incorporates the fix for using ZFS on the PowerPath pseudo emcpower devices.
Just thought I would let you all know that I followed what Alex suggested, along with what many of you pointed out, and it worked! Here are the steps I followed:

1. Break the root drive mirror.
2. zpool export the pool.
3. Run the command to enable MPxIO and reboot the machine.
4. zpool import the pool.
5. Check the system.
6. Recreate the mirror (steps 1 and 6 are sketched below).

Thank you all for the help! I feel much better, and it worked without a single problem! I'm very impressed with MPxIO and wish I had known about it before spending thousands of dollars on PowerPath.
--
This message posted from opensolaris.org
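A hedged sketch of steps 1 and 6 above (pool and device names are illustrative; the export / stmsboot -e / reboot / import in the middle is the same sequence sketched after Alex's message earlier in the thread):

# zpool detach rpool c1t1d0s0              (step 1: drop one side of the root mirror as a fallback)
  ... steps 2 through 5 ...
# zpool attach rpool c1t0d0s0 c1t1d0s0     (step 6: re-attach the second disk and let it resilver)

If enabling MPxIO changes the disk's name, use whatever name format(1M) lists for it at that point rather than the old ctd name.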
Mike wrote:
> Just thought I would let you all know that I followed what Alex suggested,
> along with what many of you pointed out, and it worked! Here are the steps
> I followed:
>
> 1. Break the root drive mirror.
> 2. zpool export the pool.
> 3. Run the command to enable MPxIO and reboot the machine.
> 4. zpool import the pool.
> 5. Check the system.
> 6. Recreate the mirror.
>
> Thank you all for the help! I feel much better, and it worked without a
> single problem! I'm very impressed with MPxIO and wish I had known about
> it before spending thousands of dollars on PowerPath.

As somebody who's done a bunch of work on stmsboot[a], and who has at least a passing knowledge of devids[b] (which are what ZFS and MPxIO use to identify devices), I am disappointed that you believe it was necessary to follow the above steps.

Assuming that your devices do not have devids which change, all that should have been required was:

[setup your root mirror]
# /usr/sbin/stmsboot -e
[reboot when prompted]
[twiddle thumbs]
[login]

No ZFS export and import required. No breaking and recreating of the mirror required.

[a] http://blogs.sun.com/jmcp/entry/on_stmsboot_1m
[b] http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuid.pdf

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
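One way to convince yourself afterwards that the remapping happened without touching the pool (a hedged sketch; pool name from the thread, output omitted):

# stmsboot -L           (lists the mapping from the pre-MPxIO device names to the new scsi_vhci names)
# zpool status myzfs    (the vdevs should simply appear under their new names)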