Robert Milkowski
2007-Feb-13 13:43 UTC
[zfs-discuss] zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is
still going.
Nothing else is running on the server. I can see one CPU sitting at 100%
in SYS:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0 0 0 0 0 100
1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
2 0 0 0 2 0 2 0 0 0 0 5 0 0 0 100
3 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
4 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
5 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
6 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
7 0 0 62 8 0 14 0 0 0 0 0 0 0 0 100
8 0 0 0 6 0 10 0 0 1 0 0 0 0 0 100
9 0 0 1 7 0 12 0 0 0 0 0 0 0 0 100
10 0 0 0 8 0 14 0 0 0 0 0 0 0 0 100
11 0 0 0 4 0 6 0 0 0 0 0 0 0 0 100
12 0 0 1 21 0 39 0 0 2 0 0 0 0 0 100
13 0 0 1 5 0 8 0 0 0 0 0 0 0 0 100
14 0 0 18 42 35 12 0 0 0 0 0 0 0 0 100
15 0 0 1 5 3 2 0 0 0 0 0 0 0 0 100
16 0 0 11 11 9 6 0 0 1 0 0 0 0 0 100
17 0 0 0 7 0 12 0 0 0 0 13 0 0 0 100
18 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
19 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
20 0 0 0 5 0 8 0 0 0 0 20 0 0 0 100
21 0 0 3 6 0 10 0 0 2 0 188 0 0 0 100
22 0 0 0 2 0 2 0 0 0 0 0 0 0 0 100
23 0 0 1 5 0 8 0 0 0 0 219 0 0 0 100
24 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
25 0 0 0 1 0 0 0 0 0 0 0 0 100 0 0
26 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
27 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
28 0 0 0 5 0 8 0 0 0 0 0 0 3 0 97
29 0 0 1 4 0 6 0 0 1 0 226 0 0 0 100
30 0 0 1 2 0 2 0 0 0 0 0 0 0 0 100
31 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 61 219 108 22 0 0 2 0 0 0 1 0 99
1 0 0 0 2 0 2 0 1 0 0 1 0 0 0 100
2 0 0 0 2 0 2 0 0 0 0 5 0 0 0 100
3 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
4 0 0 0 5 0 8 0 0 0 0 0 0 0 0 100
5 0 0 0 5 0 8 0 0 0 0 0 0 0 0 100
6 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
7 0 0 3 9 0 16 0 0 4 0 0 0 0 0 100
8 0 0 0 4 0 6 0 0 0 0 0 0 0 0 100
9 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
10 0 0 1 7 0 12 0 0 0 0 0 0 0 0 100
11 0 0 0 4 0 6 0 0 0 0 0 0 0 0 100
12 0 0 2 22 0 41 0 0 1 0 0 0 0 0 100
13 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
14 0 0 20 46 38 14 0 0 1 0 0 0 0 0 100
15 0 0 0 2 1 0 0 0 0 0 0 0 0 0 100
16 0 0 12 10 9 6 0 0 0 0 0 0 0 0 100
17 0 0 0 9 0 16 0 0 2 0 13 0 0 0 100
18 0 0 0 4 0 6 0 0 0 0 0 0 0 0 100
19 0 0 0 2 0 2 0 0 0 0 0 0 0 0 100
20 0 0 0 3 0 4 0 0 0 0 10 0 0 0 100
21 0 0 3 7 0 10 1 0 4 0 185 0 0 0 100
22 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
23 0 0 1 3 0 4 0 0 0 0 215 1 0 0 99
24 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
25 0 0 0 1 0 0 0 0 0 0 0 0 100 0 0
26 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
27 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
28 0 0 0 5 0 8 0 0 0 0 0 0 4 0 96
29 0 0 1 5 0 6 1 0 0 0 224 1 0 0 99
30 0 0 0 2 0 2 0 0 0 0 0 0 0 0 100
31 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 54 213 107 12 0 0 2 0 0 0 1 0 99
1 0 0 0 2 0 2 0 1 1 0 0 0 0 0 100
2 0 0 0 3 0 4 0 1 0 0 5 0 0 0 100
3 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
4 0 0 0 9 0 16 0 1 0 0 0 0 0 0 100
5 0 0 0 5 0 8 0 0 0 0 0 0 0 0 100
6 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
7 0 0 1 7 0 12 0 0 1 0 0 0 0 0 100
8 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
9 0 0 0 4 0 6 0 0 0 0 0 0 0 0 100
10 0 0 0 3 0 4 0 0 0 0 0 0 0 0 100
11 0 0 0 8 0 14 0 1 0 0 0 0 0 0 100
12 0 0 3 19 0 35 0 0 1 0 0 0 0 0 100
13 0 0 0 4 0 6 0 0 0 0 0 0 0 0 100
14 0 0 17 38 33 8 0 0 0 0 0 0 0 0 100
15 0 0 1 5 3 2 0 0 0 0 0 0 0 0 100
16 0 0 12 11 9 6 0 0 0 0 0 0 0 0 100
17 0 0 0 4 0 6 0 0 0 0 13 0 0 0 100
18 0 0 0 6 0 10 0 0 0 0 0 0 0 0 100
19 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
20 0 0 0 3 0 4 0 0 0 0 10 0 0 0 100
21 0 0 4 7 0 12 0 0 2 0 196 0 0 0 100
22 0 0 0 2 0 2 0 0 0 0 0 0 0 0 100
23 0 0 1 5 0 8 0 0 0 0 217 0 1 0 99
24 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
25 0 0 0 1 0 0 0 0 0 0 0 0 100 0 0
26 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
27 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
28 0 0 0 5 0 8 0 0 0 0 0 0 5 0 95
29 0 0 1 4 0 6 0 0 1 0 224 1 0 0 99
30 0 0 0 1 0 0 0 0 0 0 0 0 0 0 100
31 0 0 1 2 0 2 0 0 0 0 0 0 0 0 100
^C
bash-3.00# dtrace -n profile-1007'{@[stack()]=count();}' -n tick-10s'{trunc(@,20);printa(@);exit(0);}'
dtrace: description 'profile-1007' matched 1 probe
dtrace: description 'tick-10s' matched 1 probe
CPU ID FUNCTION:NAME
8 49714 :tick-10s
unix`xc_one+0x268
dtrace`dtrace_ioctl+0xe6c
genunix`fop_ioctl+0x20
genunix`ioctl+0x184
unix`syscall_trap+0xac
55
unix`xc_one+0x260
dtrace`dtrace_ioctl+0xe6c
genunix`fop_ioctl+0x20
genunix`ioctl+0x184
unix`syscall_trap+0xac
57
genunix`fsflush_do_pages+0x1f4
genunix`fsflush+0x3e0
unix`thread_start+0x4
62
unix`utl0+0x4c
68
genunix`anon_map_getpages+0x28c
genunix`segvn_fault_anonpages+0x310
genunix`segvn_fault+0x438
genunix`as_fault+0x4c8
unix`pagefault+0x68
unix`trap+0xd4c
unix`utl0+0x4c
71
genunix`avl_walk+0x20
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
82
genunix`avl_walk+0xa0
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
86
genunix`avl_walk+0x60
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_write_allocate_gang_members+0x18c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
87
zfs`metaslab_ff_alloc+0xb0
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
92
genunix`avl_walk+0x44
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_write_allocate_gang_members+0x18c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
97
genunix`avl_walk+0x60
zfs`metaslab_ff_alloc+0x9c
zfs`metaslab_ff_alloc+0xe8
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_write_allocate_gang_members+0x18c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
101
133
genunix`avl_walk+0x24
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
156
zfs`metaslab_ff_alloc+0x84
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
256
genunix`avl_walk+0x64
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
689
zfs`metaslab_ff_alloc+0x80
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
700
genunix`avl_walk+0x48
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
750
genunix`avl_walk+0x60
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
2589
genunix`avl_walk+0x44
zfs`metaslab_ff_alloc+0x9c
zfs`space_map_alloc+0x10
zfs`metaslab_group_alloc+0x1e0
zfs`metaslab_alloc_dva+0x114
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x4c
zfs`zio_write_compress+0x1ec
zfs`arc_write+0xe4
zfs`dbuf_sync+0x6c0
zfs`dnode_sync+0x35c
zfs`dmu_objset_sync_dnodes+0x6c
zfs`dmu_objset_sync+0x54
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x64
zfs`spa_sync+0x1b0
zfs`txg_sync_thread+0x134
unix`thread_start+0x4
2608
unix`cpu_halt+0xc8
unix`idle+0x128
unix`thread_start+0x4
287026
bash-3.00#
bash-3.00# dtrace -n fbt:::entry'{self->vt=vtimestamp;}' -n fbt:::return'/self->vt/{@[probemod,probefunc]=sum(vtimestamp-self->vt);}' -n tick-10s'{trunc(@,20);printa(@);exit(0);}'
dtrace: description 'fbt:::entry' matched 24134 probes
dtrace: description 'fbt:::return' matched 24091 probes
dtrace: description 'tick-10s' matched 1 probe
CPU ID FUNCTION:NAME
18 49714 :tick-10s
  unix                    disp_anywork                            12085227
  SUNW,UltraSPARC-T1      bcopy                                   12632788
  unix                    kstat_hold                              13037942
  genunix                 cv_wait                                 13654225
  genunix                 p_online                                16763096
  genunix                 syscall_mstate                          17423367
  unix                    disp_getwork                            17982885
  genunix                 ioctl                                   19162882
  unix                    kstat_hold_bykid                        21893794
  unix                    putnext                                 23075984
  e1000g                  e1000g_intr_pciexpress                  25999320
  unix                    i_ddi_trigger_softint                   27343869
  genunix                 ddi_intr_trigger_softint                27398272
  unix                    setsoftint                              27842783
  unix                    page_next_scan_large                    36041435
  unix                    resume                                  37236734
  unix                    mutex_vector_enter                      44091130
  genunix                 fop_ioctl                               51053509
  genunix                 avl_walk                               169227846
  unix                    cpu_halt                            192394373362
bash-3.00#
avl_walk again.....
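All that time in avl_walk fits the profile stacks: metaslab_ff_alloc() walks
the free-segment AVL tree looking for the first fit, so a fragmented space
map means a long walk for every allocation. Something like the following
(untested sketch; the probe names come from the stacks above, the per-thread
bookkeeping is my own, and recursive metaslab_ff_alloc calls will reset the
counter, so treat it as a rough lower bound) should show how far each
allocation walks:

dtrace -n '
/* count AVL-tree steps taken inside each first-fit allocation */
fbt:zfs:metaslab_ff_alloc:entry { self->inalloc = 1; self->walks = 0; }
fbt:genunix:avl_walk:entry /self->inalloc/ { self->walks++; }
fbt:zfs:metaslab_ff_alloc:return /self->inalloc/
{
        @["avl_walk calls per allocation"] = quantize(self->walks);
        self->inalloc = 0;
}
tick-30s { exit(0); }'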
bash-3.00# zpool status
  pool: f3-2
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        f3-2        ONLINE       0     0     0
          c2t42d0   ONLINE       0     0     0
          c2t42d1   ONLINE       0     0     0
          c2t42d2   ONLINE       0     0     0
errors: No known data errors
bash-3.00# iostat -xnz 1
[...]
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 16.2 0.0 35.4 0.0 0.1 0.0 3.7 0 2 c2t42d2
0.0 31.3 0.0 634.3 0.0 0.1 0.0 4.1 0 3 c2t42d1
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 3.0 0.0 1.5 0.0 0.0 0.0 1.7 0 1 c2t42d2
0.0 9.0 0.0 385.7 0.0 0.0 0.0 3.9 0 2 c2t42d1
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 25.0 0.0 230.5 0.0 0.1 0.0 3.8 0 3 c2t42d2
0.0 34.0 0.0 632.0 0.0 0.1 0.0 4.1 0 3 c2t42d1
0.0 5.0 0.0 5.0 0.0 0.0 0.0 2.5 0 1 c2t42d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 9.0 0.0 385.5 0.0 0.0 0.0 3.9 0 2 c2t42d2
0.0 3.0 0.0 1.5 0.0 0.0 0.0 1.7 0 1 c2t42d0
^C
bash-3.00#
Not really much is happening here - it looks like the CPU is the limiting factor.
bash-3.00# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace ufs sd px md ip sctp usba
fcp fctl qlc crypto lofs zfs random nfs ptm ssd logindmux cpc fcip ]
> ::ps!grep zpool
R  17472  15928  17472  15928      0 0x4a004000 00000303b55647f8 zpool
> 00000303b55647f8::walk thread|::findstack -v
stack pointer for thread 307939f6fc0: 2a102d5cf41
[ 000002a102d5cf41 cv_wait+0x38() ]
  000002a102d5cff1 txg_wait_synced+0x54(6000c977d90, 337e86, 337e84,
    6000c977dd0, 6000c977dd2, 6000c977d88)
  000002a102d5d0a1 zfs_sync+0xb4(6000ca16d00, 0, 6000ca00440, 6000ca00494, 0, 0)
  000002a102d5d151 dounmount+0x2c(6000ca16d00, 0, 30268227998, 300150ac308,
    6001210fcc0, 3000650b6)
  000002a102d5d201 umount2+0x140(4e7952, 0, 0, 47878, 0, 0)
  000002a102d5d2e1 syscall_trap32+0xcc(4e7952, 0, 0, 47878, 1ac80, ff396000)
>
To me it looks like there are many outstanding transactions in memory
which ZFS is trying to commit (put to disk), but just traversing some kind
of in-memory metadata consumes one entire CPU, which is the limiting
factor here, and in the end it can't even saturate the disks (well, it's
so sloooooow that it wouldn't saturate a floppy disk).
Why only one CPU? And why is CPU needed at all (in such quantities) just
to flush caches?
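One way to put a number on it would be to time each txg sync directly; a
minimal sketch against the spa_sync entry/return probes that show up in the
stacks above:

dtrace -n '
/* distribution of pool-sync times */
fbt:zfs:spa_sync:entry { self->ts = timestamp; }
fbt:zfs:spa_sync:return /self->ts/
{
        @["spa_sync duration (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}'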
Mark Maybee
2007-Feb-13 14:54 UTC
[zfs-discuss] zpool export consumes whole CPU and takes more than 30 minutes to complete
Robert,

This doesn't look like cache flushing; rather, it looks like we are trying
to finish up some writes... but are having a hard time allocating space
for them. Is this pool almost 100% full? There are lots of instances of
zio_write_allocate_gang_members(), which indicates a very high degree of
fragmentation in the pool.

-Mark
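A quick way to check the gang-block theory would be to count those
allocations for a few seconds; an untested sketch, with the probe name
taken from the stacks in your mail:

dtrace -n '
/* how often do writes fall back to gang-block allocation? */
fbt:zfs:zio_write_allocate_gang_members:entry { @gangs = count(); }
tick-10s { printa("gang allocations in 10s: %@d\n", @gangs); exit(0); }'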
Robert Milkowski
2007-Feb-13 16:02 UTC
[zfs-discuss] zpool export consumes whole CPU and takes more than 30 minutes to complete
Hello Mark,
Tuesday, February 13, 2007, 3:54:36 PM, you wrote:
MM> Robert,
MM> This doesn't look like cache flushing; rather, it looks like we are
MM> trying to finish up some writes... but are having a hard time allocating
MM> space for them. Is this pool almost 100% full? There are lots of
MM> instances of zio_write_allocate_gang_members(), which indicates a very
MM> high degree of fragmentation in the pool.
The pool is 1.2TB in size and there's 747GB still available.
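That said, free space alone doesn't rule fragmentation out; first-fit still
needs a contiguous run of each requested size. A sketch to see the
request-size distribution (untested, and assuming arg1 of metaslab_alloc is
the requested size in this build):

dtrace -n '
/* sizes being requested from the allocator during the sync */
fbt:zfs:metaslab_alloc:entry { @["requested size (bytes)"] = quantize(arg1); }
tick-10s { exit(0); }'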
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com