search for: zfsdev_ioctl

Displaying 18 results from an estimated 18 matches for "zfsdev_ioctl".

2007 Nov 27
0
zpool detach hangs, causes other zpool commands, format, df etc. to hang
...f8 CMD: zpool detach disk1 c6t7d0 stack pointer for thread fffffe84d34b4920: fffffe8001c30c10 [ fffffe8001c30c10 _resume_from_idle+0xf8() ] swtch+0x110() cv_wait+0x68() spa_config_enter+0x50() spa_vdev_enter+0x2a() spa_vdev_detach+0x39() zfs_ioc_vdev_detach+0x48() zfsdev_ioctl+0x13e() cdev_ioctl+0x1d() spec_ioctl+0x50() fop_ioctl+0x25() ioctl+0xac() sys_syscall32+0x101() Other zpool commands, df, format all waiting on a mutex lock spa_namespace_lock to release: PC: _resume_from_idle+0xf8 CMD: zpool status stack pointer for thread fffffe84d34...
2007 Mar 21
4
HELP!! I can't mount my zpool!!
..._common+15b () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ed00 zfs:spa_get_stats+42 () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ed40 zfs:zfs_ioc_pool_stats+3f () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ed80 zfs:zfsdev_ioctl+144 () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ed90 genunix:cdev_ioctl+1d () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047edb0 specfs:spec_ioctl+50 () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ede0 genunix:fop_ioc...
2010 Feb 08
5
zfs send/receive : panic and reboot
...c4810 zfs:dmu_objset_find+40 () Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4a70 zfs:dmu_recv_stream+448 () Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4c40 zfs:zfs_ioc_recv+41d () Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4cc0 zfs:zfsdev_ioctl+175 () Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4d00 genunix:cdev_ioctl+45 () Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4d40 specfs:spec_ioctl+5a () Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ffffff00053c4dc0 genunix:fop_ioctl+7b () Feb 8...
2008 May 20
7
[Bug 1986] New: 'zfs destroy' hangs on encrypted dataset
http://defect.opensolaris.org/bz/show_bug.cgi?id=1986 Summary: 'zfs destroy' hangs on encrypted dataset Classification: Development Product: zfs-crypto Version: unspecified Platform: Other OS/Version: Solaris Status: NEW Severity: major Priority: P2 Component: other
2008 Jun 16
3
[Bug 2247] New: tests/functional/cli_root/zpool_upgrade/ zpool_upgrade_007_pos panics - zfs snapshot
...fc4c7b0 unix:_cmntrap+e9 () ffffff000fc4c8e0 zfs:zap_leaf_lookup_closest+3e () ffffff000fc4c970 zfs:fzap_cursor_retrieve+ce () ffffff000fc4ca20 zfs:zap_cursor_retrieve+145 () ffffff000fc4cbe0 zfs:dmu_snapshot_list_next+7e () ffffff000fc4cc30 zfs:zfs_ioc_snapshot_list_next+a2 () ffffff000fc4ccb0 zfs:zfsdev_ioctl+140 () ffffff000fc4ccf0 genunix:cdev_ioctl+48 () ffffff000fc4cd30 specfs:spec_ioctl+86 () ffffff000fc4cdb0 genunix:fop_ioctl+7b () ffffff000fc4cec0 genunix:ioctl+174 () ffffff000fc4cf10 unix:brand_sys_syscall32+197 () Test log: /net/tas.sfbay/export/projects/zfs-crypto/Results/zfs-tests/cli_root/z...
2007 Nov 25
2
Corrupted pool
...a1cc0 zfs:spa_open_common+15b () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d00 zfs:spa_get_stats+42 () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d40 zfs:zfs_ioc_pool_stats+3f () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d80 zfs:zfsdev_ioctl+146 () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d90 genunix:cdev_ioctl+1d () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1db0 specfs:spec_ioctl+50 () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1de0 genunix:fop_ioctl+25 () Nov 24 04...
2010 Aug 30
5
pool died during scrub
...ait+0x2d() fffffe80007cbb40 arc_read_nolock+0x668() fffffe80007cbbd0 dmu_objset_open_impl+0xcf() fffffe80007cbc20 dsl_pool_open+0x4e() fffffe80007cbcc0 spa_load+0x307() fffffe80007cbd00 spa_open_common+0xf7() fffffe80007cbd10 spa_open+0xb() fffffe80007cbd30 pool_status_check+0x19() fffffe80007cbd80 zfsdev_ioctl+0x1b1() fffffe80007cbd90 cdev_ioctl+0x1d() fffffe80007cbdb0 spec_ioctl+0x50() fffffe80007cbde0 fop_ioctl+0x25() fffffe80007cbec0 ioctl+0xac() fffffe80007cbf10 _sys_sysenter_post_swapgs+0x14b() pool: srv id: 9515618289022845993 state: UNAVAIL status: One or more devices are missing from the...
2007 Aug 26
3
Kernel panic receiving incremental snapshots
..._close+1d () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112d20 zfs:dmu_recvbackup+5b5 () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112d40 zfs:zfs_ioc_recvbackup+45 () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112d80 zfs:zfsdev_ioctl+146 () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112d90 genunix:cdev_ioctl+1d () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112db0 specfs:spec_ioctl+50 () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112de0 genunix:fop_ioc...
2011 May 03
4
multiple disk failures cause zpool hang
Hi, there seem to be a few threads about zpool hangs. Do we have a workaround to resolve the hang without rebooting? In my case, I have a pool with disks from external LUNs via a fiber cable. When the cable is unplugged while there is I/O in the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and plugging the cable back in does not solve the problem. Eventually, I
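A minimal sketch of how hangs like this are usually triaged on Solaris/illumos (not from the thread itself; assumes root access and the standard `mdb` and `pstack` tools). The `::stacks` dcmd groups kernel thread stacks, making it easy to spot the threads blocked in `spa_config_enter`/`cv_wait` on `spa_namespace_lock`, as in the traces above:

```shell
# Group kernel thread stacks by the zfs module; output details vary
# by release, but blocked zpool/zfs threads cluster into a few stacks.
echo "::stacks -m zfs" | mdb -k

# User-land side: show where the hung zpool process sits in the ioctl.
pstack $(pgrep -x zpool)
```

Repeating the `mdb` command a few times shows whether the stacks are static (a true deadlock on the lock) or slowly moving.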
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
...:zio_free_blk+2d (82a5b980, 824aace0,) 824aacb4 zfs:zil_free_log_block+20 (c314f440, 824aace0,) 824aad90 zfs:zil_parse+1aa (c314f440, f4974768,) 824aaddc zfs:zil_destroy+dd (c314f440, 0) 824aae00 zfs:dmu_objset_destroy+35 (8e6ef000) 824aae18 zfs:zfs_ioc_destroy+41 (8e6ef000, 5a18, 3, ) 824aae40 zfs:zfsdev_ioctl+d8 (2d80000, 5a18, 8046) 824aae6c genunix:cdev_ioctl+2e (2d80000, 5a18, 8046) 824aae94 specfs:spec_ioctl+65 (8773eb40, 5a18, 804) 824aaed4 genunix:fop_ioctl+46 (8773eb40, 5a18, 804) 824aaf84 genunix:ioctl+151 (3, 5a18, 8046ab8, 8) on reboot I then finished the zfs destroy -r z/snv_68 and zfs crea...
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average only about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority, or throttling it too much?
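A commonly suggested mitigation for bursty send/receive throughput (not from this thread; assumes `mbuffer` is installed on both hosts, and the dataset/host names are placeholders) is to insert a large memory buffer on each end so `zfs send` and `zfs receive` are decoupled from network jitter:

```shell
# Sketch: buffer an incremental replication stream on both ends.
# tank/fs@snap1, tank/fs@snap2, backuphost, and tank/copy are placeholders.
zfs send -i tank/fs@snap1 tank/fs@snap2 \
  | mbuffer -q -s 128k -m 1G \
  | ssh backuphost 'mbuffer -q -s 128k -m 1G | zfs receive -F tank/copy'
```

The buffers absorb the pauses `zfs send` makes between bursts, which often raises average throughput well above the unbuffered pipeline.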
2011 Jan 18
4
Zpool Import Hanging
...930 dmu_objset_find_spa+0x153() ffffff0008f2b970 dmu_objset_find+0x40() ffffff0008f2ba40 spa_load_impl+0xb23() ffffff0008f2bad0 spa_load+0x117() ffffff0008f2bb50 spa_load_best+0x78() ffffff0008f2bbf0 spa_import+0xee() ffffff0008f2bc40 zfs_ioc_pool_import+0xc0() ffffff0008f2bcc0 zfsdev_ioctl+0x177() ffffff0008f2bd00 cdev_ioctl+0x45() ffffff0008f2bd40 spec_ioctl+0x5a() ffffff0008f2bdc0 fop_ioctl+0x7b() ffffff0008f2bec0 ioctl+0x18e() ffffff0008f2bf10 sys_syscall32+0xff() I have this in a loop running every 15 secs, and I'll occasionally see some ddt_* lines as well (cur...
2008 Apr 28
3
[Bug 1657] New: tests/functional/acl/nontrivial/ zfs_acl_cp_001_pos causes panic
...00f8ad900 unix:die+f4 () ffffff000f8ada30 unix:trap+37e () ffffff000f8ada40 unix:cmntrap+1d0 () ffffff000f8adb40 zfs:dsl_dataset_get_spa+f () ffffff000f8adbb0 zfs:dsl_dataset_destroy+25a () ffffff000f8adbf0 zfs:dmu_objset_destroy+5e () ffffff000f8adc20 zfs:zfs_ioc_destroy+42 () ffffff000f8adca0 zfs:zfsdev_ioctl+162 () ffffff000f8adce0 genunix:cdev_ioctl+48 () ffffff000f8add20 specfs:spec_ioctl+86 () ffffff000f8adda0 genunix:fop_ioctl+7b () ffffff000f8adeb0 genunix:ioctl+174 () ffffff000f8adf00 unix:brand_sys_syscall32+292 () Test journals: Test_Case_Start| 116843 tests/functional/acl/trivial/zfs_acl_comp...
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed:
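A hedged sketch of the usual first step in this kind of recovery (not from the thread; the file path and pool name are placeholders): `zdb -l` dumps the four vdev labels, each of which carries an array of uberblocks, so you can see which transaction groups are on disk before attempting anything destructive:

```shell
# Dump the vdev labels (and their uberblock arrays) of the file-backed
# vdev; /ufs/path/to/pool-file is a placeholder.
zdb -l /ufs/path/to/pool-file

# Read-only walk of an exported/unimportable pool (option support
# varies by build); poolname is a placeholder.
zdb -e poolname
```

Because `zdb` works read-only from user land, it can often examine a pool whose import panics the kernel.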
2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been experiencing ZFS lock ups regularly (perhaps once every 2-3 days). The machine is a backup server and receives hourly ZFS snapshots from another thumper - as such, the amount of zfs activity tends to be reasonably high. After about 48 - 72 hours, the file system seems to lock up and I'm unable to do anything
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and: PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP 827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209 Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why? -- This message posted from opensolaris.org
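A minimal sketch of how one might answer the "where and why" question for a process like that zdb (standard Solaris observability tools; PID 827 is taken from the prstat line above): snapshot the thread stacks and microstates a few times and compare:

```shell
# Dump user-land stacks of all threads in the zdb process; identical
# stacks across repeated snapshots suggest it is truly stuck, while
# changing stacks mean slow progress.
pstack 827

# Per-thread microstate accounting every 5 seconds: shows whether
# threads are on CPU, sleeping, or blocked on locks.
prstat -mL -p 827 5
```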
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS, System was rebooted and after reboot server again System is snv_39, SPARC, T2000 bash-3.00# ptree 7 /lib/svc/bin/svc.startd -s 163 /sbin/sh /lib/svc/method/fs-local 254 /usr/sbin/zfs mount -a [...] bash-3.00# zfs list|wc -l 46 Using df I can see most file systems are already mounted. > ::ps!grep zfs R 254 163 7 7 0 0x4a004000
2009 Feb 24
44
Motherboard for home zfs/solaris file server
Hello, I am building a home file server and am looking for an ATX mother board that will be supported well with OpenSolaris (onboard SATA controller, network, graphics if any, audio, etc). I decided to go for Intel based boards (socket LGA 775) since it seems like power management is better supported with Intel processors and power efficiency is an important factor. After reading several