Hi all,

I'm trying to delete a zpool and when I do, I get this error:

# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#

The pools I have on this box look like this:

# zpool list
NAME          SIZE    USED   AVAIL   CAP  HEALTH    ALTROOT
oradata_fs1   532G    119K    532G    0%  DEGRADED  -
rpool         136G   28.6G    107G   21%  ONLINE    -
#

Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.
On Fri, Mar 19, 2010 at 1:26 PM, Grant Lowe <glowe at sbcglobal.net> wrote:
> Hi all,
>
> I'm trying to delete a zpool and when I do, I get this error:
>
> # zpool destroy oradata_fs1
> cannot open 'oradata_fs1': I/O error
> #
>
> The pools I have on this box look like this:
>
> # zpool list
> NAME          SIZE    USED   AVAIL   CAP  HEALTH    ALTROOT
> oradata_fs1   532G    119K    532G    0%  DEGRADED  -
> rpool         136G   28.6G    107G   21%  ONLINE    -
> #
>
> Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.

Please send the result of zpool status. Your devices are probably all
offline, but that shouldn't stop you from removing it, at least not on
OpenSolaris.

--
Giovanni
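A minimal sketch of the check being requested here, assuming the pool name from the original post (exact output will vary by system):

# zpool status -v oradata_fs1

The -v flag also lists any files with permanent errors, and zpool status -x can be used to report only the pools that are unhealthy.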
Hi Grant,

An I/O error generally means that there is some problem either accessing the
disk or disks in this pool, or a disk label got clobbered.

Does zpool status provide any clues about what's wrong with this pool?

Thanks,

Cindy

On 03/19/10 10:26, Grant Lowe wrote:
> Hi all,
>
> I'm trying to delete a zpool and when I do, I get this error:
>
> # zpool destroy oradata_fs1
> cannot open 'oradata_fs1': I/O error
> #
>
> The pools I have on this box look like this:
>
> # zpool list
> NAME          SIZE    USED   AVAIL   CAP  HEALTH    ALTROOT
> oradata_fs1   532G    119K    532G    0%  DEGRADED  -
> rpool         136G   28.6G    107G   21%  ONLINE    -
> #
>
> Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.
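One way to check the "clobbered label" possibility is to dump the ZFS labels straight off the device. This is only a sketch; the device path below is a placeholder, not one taken from this thread:

# zdb -l /dev/dsk/<device>s0

zdb -l prints the copies of the vdev label stored on the device; if it reports that it failed to unpack some or all of them, the on-disk label is likely damaged.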
Hi Cindy,
Here's the zpool status:
# zpool status -v
pool: oradata_fs1
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: scrub completed after 0h0m with 1 errors on Thu Mar 18 17:00:12 2010
config:
NAME                                     STATE     READ WRITE CKSUM
oradata_fs1                              DEGRADED     0     0    26
  c4t60060E80143997000001399700000030d0  DEGRADED     0     0   128  too many errors
errors: Permanent errors have been detected in the following files:
oradata_fs1:<0x0>
#
That doesn't really seem to help. For what it's worth, this
is a SAN LUN. What am I missing?
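A couple of things that are sometimes tried at this point; this is only a sketch, under the assumption that the SAN LUN is reachable again, not something confirmed against this particular pool:

# zpool clear oradata_fs1
# zpool destroy -f oradata_fs1

zpool clear resets the error counters and retries the devices, and -f forces the destroy; if the LUN is still returning I/O errors from the SAN side, neither may succeed until the path to the device is healthy.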
----- Original Message ----
From: Cindy Swearingen <Cindy.Swearingen at Sun.COM>
To: Grant Lowe <glowe at sbcglobal.net>
Cc: zfs-discuss at opensolaris.org
Sent: Fri, March 19, 2010 10:21:45 AM
Subject: Re: [zfs-discuss] zpool I/O error
Hi Grant,
An I/O error generally means that there is some problem either accessing the
disk or disks in this pool, or a disk label got clobbered.
Does zpool status provide any clues about what's wrong with this pool?
Thanks,
Cindy
On 03/19/10 10:26, Grant Lowe wrote:
> Hi all,
>
> I'm trying to delete a zpool and when I do, I get this error:
>
> # zpool destroy oradata_fs1
> cannot open 'oradata_fs1': I/O error
> #
> The pools I have on this box look like this:
>
> # zpool list
> NAME          SIZE    USED   AVAIL   CAP  HEALTH    ALTROOT
> oradata_fs1   532G    119K    532G    0%  DEGRADED  -
> rpool         136G   28.6G    107G   21%  ONLINE    -
> #
>
> Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.
>