Displaying 11 results from an estimated 11 matches for "zio_wait".
2011 May 03 (4 replies): multiple disk failures cause zpool hang
Hi,
There seem to be a few threads about zpool hangs already. Do we have a
workaround to resolve the hang without rebooting?
In my case, I have a pool built on disks from external LUNs attached via a
fiber cable. When the cable is unplugged while there is I/O on the pool,
all zpool-related commands hang (zpool status, zpool list, etc.), and
plugging the cable back in does not solve the problem.
Eventually, I
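The behavior described above matches the pool's default failmode=wait setting, which blocks all I/O (and hence every zpool command) until the missing device returns. A hedged sketch of the commonly suggested mitigation follows; "tank" is a placeholder pool name, and these commands assume a live ZFS system with appropriate privileges:

```shell
# "tank" is a hypothetical pool name. failmode controls what a pool does
# when every path to a device is lost:
#   wait     - block all I/O until the device returns (the default)
#   continue - return EIO to new writes instead of blocking
#   panic    - panic the host
zpool get failmode tank
zpool set failmode=continue tank
# Once the cable is reconnected, clear the pool's error state:
zpool clear tank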
2007 Mar 21 (4 replies): HELP!! I can't mount my zpool!!
...xt_stage+72 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e490 zfs:zio_gang_pipeline+1e ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e4a0 zfs:zio_next_stage+72 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e4d0 zfs:zio_wait_for_children+49 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e4e0 zfs:zio_wait_children_ready+15 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e4f0 zfs:zfsctl_ops_root+2f9f41e6 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fff...
2007 Sep 18 (5 replies): ZFS panic in space_map.c line 125
...3()
space_map_remove+0x239()
space_map_load+0x17d()
metaslab_activate+0x6f()
metaslab_group_alloc+0x187()
metaslab_alloc_dva+0xab()
metaslab_alloc+0x51()
zio_dva_allocate+0x3f()
zio_next_stage+0x72()
zio_checksum_generate+0x5f()
zio_next_stage+0x72()
zio_write_compress+0x136()
zio_next_stage+0x72()
zio_wait_for_children+0x49()
zio_wait_children_ready+0x15()
zio_next_stage_async+0xae()
zio_wait+0x2d()
arc_write+0xcc()
dmu_objset_sync+0x141()
dsl_dataset_sync+0x23()
dsl_pool_sync+0x7b()
spa_sync+0x116()
txg_sync_thread+0x115()
thread_start+8()
It appears ZFS is still able to read the labels from the dr...
2008 Apr 24 (0 replies): panic on zfs scrub on builds 79 & 86
...940 unix:mutex_enter+b ()
genunix: [ID 655072 kern.notice] ffffff0010091960 zfs:zio_buf_alloc+25 ()
genunix: [ID 655072 kern.notice] ffffff00100919a0 zfs:zio_read_init+49 ()
genunix: [ID 655072 kern.notice] ffffff00100919d0 zfs:zio_execute+7f ()
genunix: [ID 655072 kern.notice] ffffff0010091a10 zfs:zio_wait+2e ()
genunix: [ID 655072 kern.notice] ffffff0010091a60 zfs:traverse_read+19f ()
genunix: [ID 655072 kern.notice] ffffff0010091b00 zfs:find_block+15b ()
genunix: [ID 655072 kern.notice] ffffff0010091b90 zfs:traverse_segment+233 ()
genunix: [ID 655072 kern.notice] ffffff0010091be0 zfs:traverse_more+...
2007 Nov 25 (2 replies): Corrupted pool
...04a1630
zfs:zio_next_stage+72 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1640
zfs:zio_gang_pipeline+1e ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1650
zfs:zio_next_stage+72 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1680
zfs:zio_wait_for_children+49 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1690
zfs:zio_wait_children_ready+15 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a16a0
zfs:zfsctl_ops_root+2f96de26 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a16d0
zfs:...
2010 Aug 30 (5 replies): pool died during scrub
...; due to a NULL pointer dereference
dump content: kernel pages only
> $C
fffffe80007cb960 vdev_is_dead+2()
fffffe80007cb9a0 vdev_mirror_child_select+0x65()
fffffe80007cba00 vdev_mirror_io_start+0x44()
fffffe80007cba30 zio_vdev_io_start+0x159()
fffffe80007cba60 zio_execute+0x6f()
fffffe80007cba90 zio_wait+0x2d()
fffffe80007cbb40 arc_read_nolock+0x668()
fffffe80007cbbd0 dmu_objset_open_impl+0xcf()
fffffe80007cbc20 dsl_pool_open+0x4e()
fffffe80007cbcc0 spa_load+0x307()
fffffe80007cbd00 spa_open_common+0xf7()
fffffe80007cbd10 spa_open+0xb()
fffffe80007cbd30 pool_status_check+0x19()
fffffe80007cbd80 zfs...
2008 Dec 28 (2 replies): zfs mount hangs
...42:12 base genunix: [ID 179002 kern.notice] %l0-3:
0000000000c44002 00000000018d0e58 0000000000000001 0000000000c44002
Dec 27 04:42:12 base %l4-7: 0000000000000000 0000000000000001
0000000000000002 0000000001326e5c
Dec 27 04:42:12 base genunix: [ID 723222 kern.notice] 000002a10433f6c0
zfs:zio_wait+30 (3001365b778, 6001cdcf7e8, 3001365ba18, 3001365ba10,
30034dc1f48, 1)
Dec 27 04:42:12 base genunix: [ID 179002 kern.notice] %l0-3:
000006001cdcf7f0 000000000000ffff 0000000000000100 000000000000fc00
Dec 27 04:42:12 base %l4-7: 00000000018d7000 000000000c6eefd9
000000000c6eefd8 000000000...
2008 Dec 15 (15 replies): Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
2007 Feb 12 (17 replies): NFS/ZFS performance problems - txg_wait_open() deadlocks?
...64 | 0
txg_count: 804
total time in seconds: 179
^C
We're stuck in txg_wait_open() for over 64 seconds, with hundreds of threads waiting on the same transaction. Why is that? Note also, in the pstack output for nfsd I attached, that no thread is in zio_wait(), so we're not waiting on disks.
bash-3.00# cat zfs_check.d
#!/usr/sbin/dtrace -s

/* Time zfs_range_lock() calls made on behalf of nfsd. */
fbt::zfs_range_lock:entry
/execname == "nfsd"/
{
        self->t = timestamp;
}

fbt::zfs_range_lock:return
/self->t/
{
        /* timestamp is in nanoseconds; bucket the latency in ms. */
        @range = quantize((timestamp - self->t) / 1000000);
        self->t = 0;    /* release the thread-local variable */
}
fbt::txg_wait_open...
2007 Sep 19 (3 replies): ZFS panic when trying to import pool
...a80, ffffff008f235730, 200, ffffff008fbeaa80, 2
, 3fd0f1)
zio_dva_allocate+0x68(ffffff008f722790)
zio_next_stage+0xb3(ffffff008f722790)
zio_checksum_generate+0x6e(ffffff008f722790)
zio_next_stage+0xb3(ffffff008f722790)
zio_write_compress+0x239(ffffff008f722790)
zio_next_stage+0xb3(ffffff008f722790)
zio_wait_for_children+0x5d(ffffff008f722790, 1, ffffff008f7229e0)
zio_wait_children_ready+0x20(ffffff008f722790)
zio_next_stage_async+0xbb(ffffff008f722790)
zio_nowait+0x11(ffffff008f722790)
dmu_objset_sync+0x196(ffffff008e4e5000, ffffff008f722a10, ffffff008f260a80)
dsl_dataset_sync+0x5d(ffffff008df47e00, f...
2010 Jun 28 (23 replies): zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
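The quoted prstat line already answers part of the question: 0.2% CPU spread across 209 LWPs means zdb is blocked, not computing (on Solaris, pstack 827 would show where each LWP is parked, e.g. in zio_wait). As a small sketch, the two telling numbers can be pulled straight out of that line; the sample line below is copied from the post:

```shell
# The prstat sample line quoted above.
line='827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209'
# Split on whitespace; field 9 is CPU usage, field 10 is PROCESS/NLWP.
set -- $line
cpu=${9%\%}        # strip the trailing "%"  -> 0.2
nlwp=${10#*/}      # keep what follows "/"   -> 209
echo "cpu=${cpu}% over ${nlwp} LWPs"
```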
--
This message posted from opensolaris.org