search for: zfs_ioc_snapshot_list_next

Displaying 5 results from an estimated 5 matches for "zfs_ioc_snapshot_list_next".

2007 Jul 14
3
zfs list hangs if zfs send is killed (leaving zfs receive process)
...I wanted to terminate the zfs send process. I killed it, but the zfs receive doesn't want to die... In the meantime my zfs list command just hangs. Here is the tail end of the truss output from a "truss zfs list":
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08043484) = 0
ioctl(3, ZFS_IOC_SNAPSHOT_LIST_NEXT, 0x08045788) Err#3 ESRCH
ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x08046950) = 0
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x0804464C) = 0
ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x08045788) Err#3 ESRCH
ioctl(3, ZFS_IOC_SNAPSHOT_LIST_NEXT, 0x08045788) = 0
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08043484) = 0
ioctl...
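A note on the trace: ESRCH is the normal end-of-list reply for these iteration ioctls, so it is not an error in itself; the hang would show up as one of these list/stat ioctls simply never returning while the dead receive keeps the dataset busy. Below is a minimal sketch, modelled on the libzfs iteration pattern but not the actual source, of what `zfs list` is doing at this point; the private header path and the zfs_cmd_t details are assumptions that vary between OpenSolaris builds.

/*
 * Sketch only: `zfs list` asks /dev/zfs for one snapshot per
 * ZFS_IOC_SNAPSHOT_LIST_NEXT ioctl until the driver answers ESRCH
 * ("no more entries").
 */
#include <sys/zfs_ioctl.h>  /* zfs_cmd_t, ZFS_IOC_* (private OpenSolaris header) */
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

static void
list_snapshots(const char *dataset)
{
    zfs_cmd_t zc = { 0 };
    int fd = open("/dev/zfs", O_RDWR);

    if (fd < 0) {
        perror("open /dev/zfs");
        return;
    }

    /*
     * zc_name carries the parent dataset in and the snapshot name out;
     * zc_cookie is the kernel-side cursor, so only zc_name is reset
     * between calls.  A dataset held busy by a hung receive can leave
     * one of these calls blocked, and the whole listing stalls with it.
     */
    for ((void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name));
        ioctl(fd, ZFS_IOC_SNAPSHOT_LIST_NEXT, &zc) == 0;
        (void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name)))
        (void) printf("%s\n", zc.zc_name);

    if (errno != ESRCH)  /* ESRCH just means the list is exhausted */
        perror("ZFS_IOC_SNAPSHOT_LIST_NEXT");

    (void) close(fd);
}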
2008 Oct 09
0
"zfs set sharenfs" takes a long time to return.
...33.556 6081241 10260 usr time: 41.354 elapsed: 386.130
Looking at exactly which ioctls get called:
5991763 MNTIOC_GETMNTENT
   6782 TCGETA
   6892 ZFS_IOC_DATASET_LIST_NEXT
   3466 ZFS_IOC_OBJSET_STATS
      2 ZFS_IOC_POOL_CONFIGS
      1 ZFS_IOC_SET_PROP
      1 ZFS_IOC_SHARE
      8 ZFS_IOC_SNAPSHOT_LIST_NEXT
Is anyone else experiencing this, or does anyone have any ideas on how to solve it? Is this solved in more recent snv releases? (And are there changelogs posted somewhere?) Thanks!
-Laen
--
This message posted from opensolaris.org
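What stands out in those counts is the ratio: roughly 6,900 ZFS_IOC_DATASET_LIST_NEXT calls (about one per dataset) against almost 6 million MNTIOC_GETMNTENT calls, on the order of 870 mnttab reads per dataset. That looks like /etc/mnttab being rescanned from the top for every dataset. The sketch below is a hypothetical illustration of that pattern, not the actual zfs or libshare code; is_mounted() is an invented name, and the premise that each getmntent() against the mntfs-backed /etc/mnttab costs one MNTIOC_GETMNTENT ioctl is an assumption about OpenSolaris mntfs.

/*
 * Hypothetical illustration: a check that rescans /etc/mnttab for every
 * dataset turns N datasets x M mnttab entries into roughly N*M
 * MNTIOC_GETMNTENT ioctls.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mnttab.h>

/* Naive "is this mountpoint mounted?" check; cost is one full mnttab scan. */
static int
is_mounted(FILE *mnttab, const char *mountpoint)
{
    struct mnttab mt;

    resetmnttab(mnttab);                 /* rewind: the scan starts over */
    while (getmntent(mnttab, &mt) == 0)  /* one MNTIOC_GETMNTENT each */
        if (strcmp(mt.mnt_mountp, mountpoint) == 0)
            return (1);
    return (0);
}

Reading mnttab once per command and caching the entries would cut that down to a single scan.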
2006 Dec 11
6
Can't destroy corrupted pool
Ok, so I'm planning on wiping my test pool that seems to have problems with non-spare disks being marked as spares, but I can't destroy it:
# zpool destroy -f zmir
cannot iterate filesystems: I/O error
Anyone know how I can nuke this for good?
Jim
This message posted from opensolaris.org
2008 Jun 16
3
[Bug 2247] New: tests/functional/cli_root/zpool_upgrade/zpool_upgrade_007_pos panics - zfs snapshot
...:die+ea ()
ffffff000fc4c7a0 unix:trap+3d0 ()
ffffff000fc4c7b0 unix:_cmntrap+e9 ()
ffffff000fc4c8e0 zfs:zap_leaf_lookup_closest+3e ()
ffffff000fc4c970 zfs:fzap_cursor_retrieve+ce ()
ffffff000fc4ca20 zfs:zap_cursor_retrieve+145 ()
ffffff000fc4cbe0 zfs:dmu_snapshot_list_next+7e ()
ffffff000fc4cc30 zfs:zfs_ioc_snapshot_list_next+a2 ()
ffffff000fc4ccb0 zfs:zfsdev_ioctl+140 ()
ffffff000fc4ccf0 genunix:cdev_ioctl+48 ()
ffffff000fc4cd30 specfs:spec_ioctl+86 ()
ffffff000fc4cdb0 genunix:fop_ioctl+7b ()
ffffff000fc4cec0 genunix:ioctl+174 ()
ffffff000fc4cf10 unix:brand_sys_syscall32+197 ()
Test log: /net/tas.sfbay/export/projects...
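Whatever command triggered it, the stack shows the fault in the snapshot-listing path rather than in snapshot creation: zfs_ioc_snapshot_list_next calls dmu_snapshot_list_next, which walks the dataset's snapshot-name ZAP object with a serialized cursor, and zap_leaf_lookup_closest trips while that cursor is being repositioned. A heavily simplified kernel-side sketch of that walk follows, for orientation only; it builds only against ZFS kernel headers, and everything other than the documented zap_cursor_* calls is illustrative.

/*
 * Sketch of the shape of dmu_snapshot_list_next(): resume a serialized
 * ZAP cursor over the snapshot-name directory, fetch one entry, advance,
 * and hand the new cursor position back (it becomes zc_cookie for the
 * next ZFS_IOC_SNAPSHOT_LIST_NEXT ioctl).
 */
#include <sys/types.h>
#include <sys/systm.h>
#include <sys/dmu.h>
#include <sys/zap.h>

static int
snapshot_list_next_sketch(objset_t *os, uint64_t snapnames_zapobj,
    char *name, int namelen, uint64_t *cookiep)
{
    zap_cursor_t zc;
    zap_attribute_t attr;
    int err;

    zap_cursor_init_serialized(&zc, os, snapnames_zapobj, *cookiep);

    /* zap_cursor_retrieve -> fzap_cursor_retrieve -> zap_leaf_lookup_closest,
     * the frames where the panic above fires. */
    err = zap_cursor_retrieve(&zc, &attr);
    if (err == 0) {
        (void) strlcpy(name, attr.za_name, namelen);
        zap_cursor_advance(&zc);
        *cookiep = zap_cursor_serialize(&zc);
    }
    zap_cursor_fini(&zc);
    return (err);
}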
2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been experiencing ZFS lockups regularly (perhaps once every 2-3 days). The machine is a backup server and receives hourly ZFS snapshots from another thumper - as such, the amount of zfs activity tends to be reasonably high. After about 48-72 hours, the file system seems to lock up and I'm unable to do anything