# zpool list
NAME             SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
jira-app-zpool   272G   330K    272G    0%  ONLINE  -
The following command hangs forever. If I reboot the box, zpool list shows the
pool as ONLINE again, as in the output above.
# zpool destroy -f jira-app-zpool
How can I get rid of this pool and any reference to it?
bash-3.00# zpool status
  pool: jira-app-zpool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        jira-app-zpool  UNAVAIL      0     0     4  insufficient replicas
          c3t0d3        FAULTED      0     0     4  experienced I/O failures

errors: 2 data errors, use '-v' for a list
bash-3.00#
--
This message posted from opensolaris.org
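
When "zpool destroy" hangs like this, it is usually because ZFS is blocking on I/O to the dead device. A minimal sketch of one way to drop the pool and its boot-time reference, using this pool's name and assuming the hang really is I/O to c3t0d3 (none of this was tried in the thread):

# zpool export -f jira-app-zpool

If the export hangs as well, the remaining reference is the cache file. Moving it aside and rebooting lets the system come up without trying to open the pool. Note that /etc/zfs/zpool.cache holds the configuration of every pool on the host, so any other pools have to be re-imported afterwards with "zpool import <poolname>":

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
# init 6

After the reboot the pool should no longer appear in "zpool list". "zpool import" will still report it as importable from the on-disk labels until the disk is relabelled or reused.
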
Can you share the output of 'uname -a' and the disk controller you are using?
bash-3.00# uname -a
SunOS opf-01 5.10 Generic_138888-01 sun4v sparc SUNW,T5140

It has a dual-port SAS HBA connected to a dual-controller ST2530. The storage is connected to two T5140s. I tried exporting the pool to the other node and tried destroying it there, without any luck.

thanks
ramesh
--
This message posted from opensolaris.org
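
Both T5140s presumably see the LUN through the same ST2530 controllers, so the destroy will hang on the second node too for as long as c3t0d3 is unreachable there. Before retrying, it is worth confirming whether the LUN is visible at all; a minimal check, assuming the device still shows up as c3t0d3, would be along these lines:

# cfgadm -al
# fmadm faulty
# format          (does c3t0d3 still appear in the disk list?)
# zpool clear jira-app-zpool
# zpool status -v jira-app-zpool

If the LUN never comes back, the cache-file approach sketched above is the usual way to drop the pool reference without a working "zpool destroy".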