We deleted the mirror in the HW RAID, and now ZFS thinks the device is not
available. We're still using the same device name, c1t0d0. How do we recover?

RAID INFO before:
# raidctl
RAID Volume  RAID        RAID          Disk
Volume       Type        Status        Disk      Status
------------------------------------------------------
c1t0d0       IM          OK            c1t0d0    OK
                                       c1t1d0    OK

RAID INFO after:
# raidctl
No RAID volumes found

itsm-mpk-2# zpool status -x
  pool: canary
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        canary      UNAVAIL      0     0     0  insufficient replicas
          c1t0d0s3  UNAVAIL      0     0     0  cannot open

Thanks,
karen
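A quick way to sanity-check whether the slice itself survived the volume
deletion, before digging into ZFS, is to read its label and a raw block
directly (a sketch, using the slice name from the zpool status output above):

# prtvtoc /dev/rdsk/c1t0d0s3
# dd if=/dev/rdsk/c1t0d0s3 of=/dev/null bs=512 count=1

If either command fails, the problem is underneath ZFS, in the device node
or the disk label, rather than in the pool itself.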
Eric Schrock replied:

This suggests that both vn_open('/dev/dsk/c1t0d0s3') and
ldi_open_by_devid('.....') are failing for this device.  Are you sure that
this device exists and is readable?  What does 'zdb -L /dev/dsk/c1t0d0s3'
show?

You can try a 'zpool export' and 'zpool import' to see if it finds the
device, but ZFS should be able to handle this type of devid change.

- Eric

--
Eric Schrock, Solaris Kernel Development        http://blogs.sun.com/eschrock
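Spelled out, the checks Eric suggests look roughly like this (a sketch using
the names from this thread; note that the export may itself fail while the
pool is faulted):

# zdb -L /dev/dsk/c1t0d0s3
# zpool export canary
# zpool import

A bare 'zpool import' re-scans the devices under /dev/dsk and should list
canary as importable even if the device's devid has changed.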
Karen Chau replied:

I don't know why ZFS cannot open the device.

itsm-mpk-2# zpool import
  pool: canary
    id: 9275088414963579563
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.  The
        pool may be active on another system, but can be imported using
        the '-f' flag.
config:

        canary      ONLINE
          c1t0d0s3  ONLINE
itsm-mpk-2# zpool import canary
cannot import 'canary': pool may be in use from other system
use '-f' to import anyway
itsm-mpk-2# zpool import -f canary
cannot import 'canary': pool exists

itsm-mpk-2# zdb -L /dev/dsk/c1t0d0s3
zdb: can't open /dev/dsk/c1t0d0s3: error 22
itsm-mpk-2# ls -l /dev/dsk/c1t0d0s3
lrwxrwxrwx   1 root     root          65 Apr  4 14:46 /dev/dsk/c1t0d0s3 ->
../../devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0:d
itsm-mpk-2# ls -l ../../devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0:d
brw-r-----   1 root     sys       32, 11 Jul 24 11:47
../../devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0:d
Gregory Shaw replied:

To fix the 'pool exists' error, try 'zpool export -f canary'.  That will
clean out the old information.  Then try 'zpool import -f canary'.

If the device name changes, the above will usually fix the problem.
-----
Gregory Shaw, IT Architect
ITCTO Group, Sun Microsystems Inc.
greg.shaw@sun.com (work) / shaw@fmsoft.com (home)
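The sequence Greg describes, end to end (a sketch using the pool name from
this thread):

# zpool export -f canary
# zpool import
# zpool import -f canary
# zpool status canary

The bare 'zpool import' in the middle just re-scans and shows which device
path the pool is now found on.  'zpool export -f' removes the pool's entry
from /etc/zfs/zpool.cache, which is usually what clears the stale
'pool exists' state.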
Eric Schrock replied:

> itsm-mpk-2# zdb -L /dev/dsk/c1t0d0s3
> zdb: can't open /dev/dsk/c1t0d0s3: error 22
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^

Well, this is at least a little comforting - it suggests that you have an
underlying device problem.  Perhaps you should try running 'devfsadm' to
re-create your /dev links.

- Eric
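Re-creating the links that way might look like this (devfsadm's -C flag
cleans up dangling /dev entries and -v reports what it changed):

# devfsadm -Cv
# ls -lL /dev/dsk/c1t0d0s3
# zdb -L /dev/dsk/c1t0d0s3

If the symlink now resolves to a live device node, retry the zdb check and
then the import.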
Rich Teer replied:

> itsm-mpk-2# zdb -L /dev/dsk/c1t0d0s3
> zdb: can't open /dev/dsk/c1t0d0s3: error 22

Assuming error 22 refers to an errno value, can we please have the text of
the message instead of (or at least as well as) the number?  I can't be the
only person who finds "Invalid argument" friendlier than "error 22"...

--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member
President, Rite Online Inc.
Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
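For the record, errno 22 is EINVAL ("Invalid argument").  A quick way to
translate a bare errno number on a box with the system headers installed
(a sketch):

# grep -w 22 /usr/include/sys/errno.h

which should turn up the EINVAL definition Rich is referring to.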
"all" the zfs commands are hung :-( I tried to kill pid 1942, won''t die. # ps -ef |grep zpool root 1909 1171 0 14:33:26 pts/1 0:00 zpool import -f canary root 1942 1920 0 14:37:17 pts/2 0:00 zpool status -x root 2113 1987 0 15:00:28 pts/3 0:00 grep zpool # date Mon Jul 24 15:00:29 PDT 2006 We also experienced another issue last Friday, it took us almost 3 hrs to boot our server... here''s part of the boot messages from /var/adm/messags Jul 21 18:55:05 itsm-mpk-2 genunix: [ID 454863 kern.info] dump on /dev/dsk/c1t0d0s1 size 2100 MB Jul 21 18:55:06 itsm-mpk-2 genunix: [ID 408822 kern.info] NOTICE: ipge0: no fault external to device; service available Jul 21 18:55:06 itsm-mpk-2 genunix: [ID 611667 kern.info] NOTICE: ipge0: xcvr addr:0x01 - link up 100 Mbps full duplex Jul 21 18:55:06 itsm-mpk-2 pseudo: [ID 129642 kern.info] pseudo-device: zfs0 Jul 21 18:55:06 itsm-mpk-2 genunix: [ID 936769 kern.info] zfs0 is /pseudo/zfs at 0 Jul 21 18:55:08 itsm-mpk-2 pseudo: [ID 129642 kern.info] pseudo-device: dtrace0 Jul 21 18:55:08 itsm-mpk-2 genunix: [ID 936769 kern.info] dtrace0 is /pseudo/dtrace at 0 Jul 21 21:45:09 itsm-mpk-2 pseudo: [ID 129642 kern.info] pseudo-device: devinfo0 Jul 21 21:45:09 itsm-mpk-2 genunix: [ID 936769 kern.info] devinfo0 is /pseudo/devinfo at 0 Jul 21 21:45:09 itsm-mpk-2 pseudo: [ID 129642 kern.info] pseudo-device: pm0 Jul 21 21:45:09 itsm-mpk-2 genunix: [ID 936769 kern.info] pm0 is /pseudo/pm at 0 Jul 21 21:45:09 itsm-mpk-2 pseudo: [ID 129642 kern.info] pseudo-device: mdesc0 Jul 21 21:45:09 itsm-mpk-2 genunix: [ID 936769 kern.info] mdesc0 is /pseudo/mdesc at 0 Jul 21 21:45:10 itsm-mpk-2 px_pci: [ID 370704 kern.info] PCI-device: ide at 8, uata0 Eric Schrock wrote On 07/24/06 14:32,:> On Mon, Jul 24, 2006 at 02:22:31PM -0700, Karen Chau wrote: > >>Don''t know why zfs cannot open device? >> >>itsm-mpk-2# zpool import >> pool: canary >> id: 9275088414963579563 >> state: ONLINE >>action: The pool can be imported using its name or numeric identifier. The >> pool may be active on on another system, but can be imported using >> the ''-f'' flag. >>config: >> >> canary ONLINE >> c1t0d0s3 ONLINE >>itsm-mpk-2# zpool import canary >>cannot import ''canary'': pool may be in use from other system >>use ''-f'' to import anyway >>itsm-mpk-2# zpool import -f canary >>cannot import ''canary'': pool exists >> >> >>itsm-mpk-2# zdb -L /dev/dsk/c1t0d0s3 >>zdb: can''t open /dev/dsk/c1t0d0s3: error 22 > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > Well this is at least a little comforting - it suggests that you have an > underlying problem. Perhaps you should try running ''devfsadm'' to > re-create your /dev links. > > - Eric > > -- > Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NOTICE: This email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Eric Schrock replied:

What build are you running?  If this is within Sun, someone from the ZFS
team can go poke at it if you provide a root login (send mail to zfs-team
at sun dot com to find a candidate).  You can also try:

# mdb -k
> ::pgrep zpool | ::walk thread | ::findstack

to get an idea of where the threads are stuck.  But most likely they are
waiting on some ZIO or lock, and it would be much faster to let a ZFS
specialist take a look.

- Eric
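If an interactive mdb session is awkward, the same dcmd pipeline can be fed
to mdb on stdin (a sketch; run as root against the live kernel):

# echo "::pgrep zpool | ::walk thread | ::findstack" | mdb -k

The output is one kernel stack per zpool thread; stacks parked in a zio or
txg wait routine show where the commands are stuck.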
Mark D. replied:

Karen,

This looks like you were using the internal RAID on a T2000, is that right?
If so, is it possible that you did not re-label the drives after you deleted
the volume?

After deleting a RAID volume on the onboard controller, you must relabel the
affected drives.  The 1064 controller uses a 64 MB on-disk metadata region
when you create a volume, which alters the disk geometry, so re-labeling is
required whenever you create or delete a volume.

I'm not sure whether this could create the situation you describe here, but
it is worth checking in case it was not done.

Regards,
-Mark D.
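If the drives do need relabeling, the usual route is the interactive
format(1M) utility.  A rough outline (menu steps from memory, so treat this
as a sketch rather than an exact transcript, and be sure of what lives on
the disk first, since relabeling can alter the slice table):

# format
(select c1t0d0 from the disk list)
format> type
(choose "0. Auto configure" so the drive's native geometry is re-read)
format> label
format> quit

Afterwards, 'prtvtoc /dev/rdsk/c1t0d0s3' will show whether the slice layout
still matches what the pool expects.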