Ok, so I'm planning on wiping my test pool that seems to have problems with non-spare disks being marked as spares, but I can't destroy it:

# zpool destroy -f zmir
cannot iterate filesystems: I/O error

Anyone know how I can nuke this for good?

Jim
BTW, I'm also unable to export the pool -- same error.

Jim
Here's the truss output:

402:    ioctl(3, ZFS_IOC_POOL_LOG_HISTORY, 0x080427B8)   = 0
402:    ioctl(3, ZFS_IOC_OBJSET_STATS, 0x0804192C)        = 0
402:    ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C)   = 0
402:    ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C)   Err#3 ESRCH
402:    ioctl(3, ZFS_IOC_SNAPSHOT_LIST_NEXT, 0x0804243C)  Err#5 EIO
402:    fstat64(2, 0x08041400)                            = 0
cannot iterate filesystems402:  write(2, " c a n n o t   i t e r a".., 26)  = 26
: 402:  write(2, " : ", 2)                                = 2
I/O error402:   write(2, " I / O   e r r o r", 9)          = 9

I did take one snapshot before my last disk spindown. Should I try nuking it?

Jim
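For reference, a trace like the one above can be captured by running the failing command under truss and filtering for the ZFS ioctls. The invocation below is only an illustration of that approach, not necessarily the one used here:

# truss -f -t ioctl zpool destroy -f zmir 2>&1 | grep ZFS_IOC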
Nevermind:

# zfs destroy zmir@2006-11-30-10:28
cannot open 'zmir@2006-11-30-10:28': I/O error

Jim
You are likely hitting:

6397052 unmounting datasets should process /etc/mnttab instead of traverse DSL

Which was fixed in build 46 of Nevada. In the meantime, you can remove /etc/zfs/zpool.cache manually and reboot, which will remove all your pools (which you can then re-import on an individual basis).

- Eric

On Mon, Dec 11, 2006 at 06:58:22AM -0800, Jim Hranicky wrote:
> Ok, so I'm planning on wiping my test pool that seems to have problems
> with non-spare disks being marked as spares, but I can't destroy it:
>
> # zpool destroy -f zmir
> cannot iterate filesystems: I/O error
>
> Anyone know how I can nuke this for good?
>
> Jim
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development    http://blogs.sun.com/eschrock
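A minimal sketch of that workaround, assuming a single pool named zmir whose on-disk data is otherwise intact:

# rm /etc/zfs/zpool.cache     # makes the system forget all pools; on-disk data is untouched
# reboot
# zpool import                # after reboot, list pools found on attached devices
# zpool import zmir           # re-import each pool individually by name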
> You are likely hitting:
>
> 6397052 unmounting datasets should process /etc/mnttab instead of traverse DSL
>
> Which was fixed in build 46 of Nevada. In the meantime, you can remove
> /etc/zfs/zpool.cache manually and reboot, which will remove all your
> pools (which you can then re-import on an individual basis).

I'm running b51, but I'll try deleting the cache.

Jim
This worked. I've restarted my testing, but I've been fdisking each drive before I add it to the pool, and so far the system is behaving as expected when I spin a drive down, i.e., the hot spare gets used automatically.

This makes me wonder whether it's possible to ensure that forcibly adding a drive to a pool wipes the drive of any previous data, especially any ZFS metadata.

I'll keep the list posted as I continue my tests.

Jim
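One way to clear stale ZFS labels before force-adding a drive: ZFS keeps two labels at the front and two at the back of each device, so zeroing both ends should remove them. A hedged sketch, with a placeholder device name (c1t1d0) and a size you would have to fill in yourself -- be sure the device is not part of any active pool first:

# DEV=/dev/rdsk/c1t1d0p0                        # placeholder: raw whole-disk node
# dd if=/dev/zero of=$DEV bs=1024k count=1      # wipe the two front labels
# SIZE_MB=...                                   # total device size in MB (site-specific)
# dd if=/dev/zero of=$DEV bs=1024k oseek=$((SIZE_MB - 1)) count=1   # wipe the two back labels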