search for: spa_sync

Displaying 20 results from an estimated 28 matches for "spa_sync".

2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
...ite_compress+0x1ec zfs`arc_write+0xe4 zfs`dbuf_sync+0x6c0 zfs`dnode_sync+0x35c zfs`dmu_objset_sync_dnodes+0x6c zfs`dmu_objset_sync+0x54 zfs`dsl_dataset_sync+0xc zfs`dsl_pool_sync+0x64 zfs`spa_sync+0x1b0 zfs`txg_sync_thread+0x134 unix`thread_start+0x4 82 genunix`avl_walk+0xa0 zfs`metaslab_ff_alloc+0x9c zfs`space_map_alloc+0x10 zfs`metaslab_group_alloc+0x1e0 zfs`metaslab_alloc_dva+...
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi. System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1. We see many operations from nfs clients to that server running really slow (like 90 seconds for unlink()). It's not a problem with the network, and there's also plenty of CPU available. Storage isn't saturated either. First strange thing - normally on that server nfsd has about 1500-2500 threads. I did
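A minimal DTrace sketch for checking whether threads are stalling in txg_wait_open(), and for how long (assumes the fbt provider is available; run as root and stop with Ctrl-C):

    # histogram of time spent waiting in txg_wait_open, in nanoseconds
    dtrace -n '
      fbt::txg_wait_open:entry  { self->ts = timestamp; }
      fbt::txg_wait_open:return /self->ts/ {
        @["txg_wait_open wait (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
      }'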
2007 Jul 10
1
ZFS pool fragmentation
...lem about 2 weeks ago http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0 I found a workaround for now - changing recordsize - but I want a better solution. The best solution would be a defragmentation tool, but I can see that it is not easy. When a ZFS pool is fragmented: 1. the spa_sync function executes for a very long time (> 5 seconds), 2. the spa_sync thread often takes 100% CPU, 3. the metaslab space map is very big. There are some changes hiding the problem, like this http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6512391, and I hope they will be available in Solaris 10 u...
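A hedged sketch for confirming the first symptom, histogramming how long each spa_sync pass takes (assumes the fbt provider is available; every pool on the system is counted):

    # histogram of spa_sync duration per transaction group, in nanoseconds
    dtrace -n '
      fbt::spa_sync:entry  { self->ts = timestamp; }
      fbt::spa_sync:return /self->ts/ {
        @["spa_sync duration (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
      }'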
2008 Jun 10
3
ZFS space map causing slow performance
Hello, I have several ~12TB storage servers using Solaris with ZFS. Two of them have recently developed performance issues where the majority of the time in spa_sync() is spent in the space_map_*() functions. During this time, "zpool iostat" will show 0 writes to disk, while it does hundreds or thousands of small (~3KB) reads each second, presumably reading space map data from disk to find places to put the new blocks. The result is that it can...
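One way to confirm that the sync thread is burning its time in space_map_*() is to sample on-CPU kernel stacks; a sketch (the 30-second window and top-10 cutoff are arbitrary choices, not taken from the thread):

    # sample kernel stacks ~997 times/sec per CPU for 30s, then print the 10 hottest
    dtrace -n '
      profile-997 /arg0/ { @[stack()] = count(); }
      tick-30s { trunc(@, 10); exit(0); }'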
2007 Sep 19
3
ZFS panic when trying to import pool
...y+0x20(ffffff008f722790) zio_next_stage_async+0xbb(ffffff008f722790) zio_nowait+0x11(ffffff008f722790) dmu_objset_sync+0x196(ffffff008e4e5000, ffffff008f722a10, ffffff008f260a80) dsl_dataset_sync+0x5d(ffffff008df47e00, ffffff008f722a10, ffffff008f260a80) dsl_pool_sync+0xb5(ffffff00882fb800, 3fd0f1) spa_sync+0x1c5(ffffff008f2d1a80, 3fd0f1) txg_sync_thread+0x19a(ffffff00882fb800) thread_start+8() And here is the panic message buf: panic[cpu0]/thread=ffffff0001ba2c80: assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0 x6 == 0x0), file: ../../common/fs/zfs/space_map.c,...
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
...s:zio_wait_children_ready+18 (88e56dc0) 87a61c34 zfs:zio_next_stage_async+ac (88e56dc0) 87a61c48 zfs:zio_nowait+e (88e56dc0) 87a61c94 zfs:dmu_objset_sync+184 (85fe96c0, 88757ae0,) 87a61cbc zfs:dsl_dataset_sync+40 (813ad000, 88757ae0,) 87a61d0c zfs:dsl_pool_sync+a3 (8291c0c0, 286de2, 0) 87a61d6c zfs:spa_sync+1fc (82a7b900, 286de2, 0) 87a61dc8 zfs:txg_sync_thread+1df (8291c0c0, 0) 87a61dd8 unix:thread_start+8 () on second reboot it also Oct 11 18:17:56 nas ^Mpanic[cpu1]/thread=8f334de0: zfs: allocating allocated segment(offset=77984887808 size=66560) 8f33485c genunix:vcmn_err+16 (3, f4571654, 8f334...
2006 Jul 30
6
zfs mount stuck in zil_replay
...ite_compress+0x1e4 zfs`arc_write+0xbc zfs`dbuf_sync+0x6b0 zfs`dnode_sync+0x300 zfs`dmu_objset_sync_dnodes+0x68 zfs`dmu_objset_sync+0x50 zfs`dsl_dataset_sync+0xc zfs`dsl_pool_sync+0x60 zfs`spa_sync+0xe0 zfs`txg_sync_thread+0x130 unix`thread_start+0x4 316769 genunix`avl_find+0x38 zfs`space_map_remove+0x98 zfs`space_map_load+0x214 zfs`metaslab_activate+0x3c zfs`metaslab_group_alloc+0x1b...
2011 May 03
4
multiple disk failures cause zpool hang
Hi, there seem to be a few threads about zpool hangs; do we have a workaround to resolve the hang without rebooting? In my case, I have a pool with disks from external LUNs via a fibre cable. When the cable is unplugged while there is IO in the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and putting the cable back does not solve the problem. Eventually, I
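One relevant knob, sketched with a hypothetical pool name tank: the failmode pool property controls how new I/O behaves once every path to a device is lost, although it cannot unwedge commands that are already blocked:

    # show the current failure-mode policy for the pool
    zpool get failmode tank
    # "wait" (the default) blocks I/O until the device returns;
    # "continue" returns EIO to new I/O instead of hanging
    zpool set failmode=continue tank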
2008 Jul 29
2
Unexpected b_hdr change.
...dbuf_sync_list+0x17f dbuf_sync_list() at dbuf_sync_list+0x17f dbuf_sync_list() at dbuf_sync_list+0x17f dbuf_sync_list() at dbuf_sync_list+0x17f dbuf_sync_list() at dbuf_sync_list+0x17f dnode_sync() at dnode_sync+0x9bd dmu_objset_sync() at dmu_objset_sync+0x120 dsl_pool_sync() at dsl_pool_sync+0x72 spa_sync() at spa_sync+0x2f3 txg_sync_thread() at txg_sync_thread+0x2cd Do you have any ideas how to fix it? Kris has a way to reproduce it in his environment and I'm sure he could try a patch, if you could provide one. -- Pawel Jakub Dawidek http://www.wheel.pl pjd at Free...
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; and in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift. When I plugged the drives back in, initially, it went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
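The usual next step after removing zpool.cache, sketched with a hypothetical pool name:

    # scan the device directory for importable pools no longer listed in zpool.cache
    zpool import -d /dev/dsk
    # force-import the pool reported above, even if it appears in use by another host
    zpool import -f -d /dev/dsk tank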
2007 Sep 18
5
ZFS panic in space_map.c line 125
...ge+0x72() zio_checksum_generate+0x5f() zio_next_stage+0x72() zio_write_compress+0x136() zio_next_stage+0x72() zio_wait_for_children+0x49() zio_wait_children_ready+0x15() zio_next_stage_async+0xae() zio_wait+0x2d() arc_write+0xcc() dmu_objset_sync+0x141() dsl_dataset_sync+0x23() dsl_pool_sync+0x7b() spa_sync+0x116() txg_sync_thread+0x115() thread_start+8() It appears ZFS is still able to read the labels from the drive: $ zdb -lv /dev/rdsk/c3t50002AC00039040Bd0p0 -------------------------------------------- LABEL 0 -------------------------------------------- version=3 name='fpool0...
2007 Jun 16
5
zpool mirror faulted
...einstein genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000060000fb88c0 0000060009dd4dc0 0000060000f506d8 Jun 16 14:44:43 einstein %l4-7: 0000060000f506a8 0000060000f50678 0000060000f505e8 0000060002d5c580 Jun 16 14:44:43 einstein genunix: [ID 723222 kern.notice] 000002a101553940 zfs:spa_sync+1b0 (60000fb8500, a4653, 0, 0, 2a101553cc4, 1) Jun 16 14:44:44 einstein genunix: [ID 179002 kern.notice] %l0-3: 0000060000fb86c0 0000060000fb86d0 0000060000fb85e8 0000060009dd8500 Jun 16 14:44:44 einstein %l4-7: 0000000000000000 000006000360f040 0000060000f50540 0000060000fb8680 Jun 16 14:44:44...
2007 Apr 03
2
ZFS panics with dmu_buf_hold_array
...1, d471b0) Apr 3 16:37:56 columbia genunix: [ID 353471 kern.notice] d4042cdc zfs:metaslab_sync+263 (d471b080, 2b2225, 0) Apr 3 16:37:56 columbia genunix: [ID 353471 kern.notice] d4042d1c zfs:vdev_sync+ba (d34d8680, 2b2225, 0) Apr 3 16:37:56 columbia genunix: [ID 353471 kern.notice] d4042d6c zfs:spa_sync+129 (d995cac0, 2b2225, 0) Apr 3 16:37:56 columbia genunix: [ID 353471 kern.notice] d4042dc8 zfs:txg_sync_thread+1df (d44f9a80, 0) Apr 3 16:37:56 columbia genunix: [ID 353471 kern.notice] d4042dd8 unix:thread_start+8 () Apr 3 16:37:56 columbia unix: [ID 100000 kern.notice] Apr 3 16:37:56 columb...
2008 May 26
5
[Bug 2033] New: 'zfs create' causes panic if key file doesn't exist
...570 0000000000000000 000002a100a677d0 zfs:dsl_pool_sync+ec (60010fac140, 7, 60010edd800, 7b309280, 600103c7880, 60010edd830) %l0-3: 0000000000000005 0000000000000000 0000030004e524c8 0000060010fac2c8 %l4-7: 00000600116de2d8 0000000000000000 0000060010fac328 000000000128d800 000002a100a67880 zfs:spa_sync+208 (30004e520c0, 7, 0, 7b3090e0, 30004e52210, 7b309280) %l0-3: 0000030004e521b0 0000060010fac2f8 0000060010fac2c8 00000600101423c0 %l4-7: 0000030004e52288 00000600110bd780 0000060010fac140 0000030004e52240 000002a100a67950 zfs:txg_sync_thread+248 (60010fac140, 7, 7b2fac0b, 5bad, 60010fac258, b...
2008 Jun 04
2
panic on `zfs export` with UNAVAIL disks
...dren, txg) == 0 (0x5 == 0x0), file: ../../common/fs/zfs/spa.c, line: 4095 Jun 4 11:31:47 alusol unix: [ID 100000 kern.notice] Jun 4 11:31:47 alusol genunix: [ID 655072 kern.notice] ffffff0004b1ab30 genunix:assfail3+b9 () Jun 4 11:31:47 alusol genunix: [ID 655072 kern.notice] ffffff0004b1abd0 zfs:spa_sync+5d2 () Jun 4 11:31:47 alusol genunix: [ID 655072 kern.notice] ffffff0004b1ac60 zfs:txg_sync_thread+19a () Jun 4 11:31:47 alusol genunix: [ID 655072 kern.notice] ffffff0004b1ac70 unix:thread_start+8 () ============== after being back up i could do `zfs import` and have the pool back: state: ONL...
2008 Jun 08
2
[Bug 2175] New: running full test/ cli tests cause panic after a number of tests run..
...0 0000000000000000 000002a100787810 zfs:dsl_pool_sync+ec (3000d3c97c0, 21a, 30033b38a00, 7b31dbe0, 600103ff6c0, 30033b38a30) %l0-3: 0000000000000005 0000000000000000 0000030004f1e448 000003000d3c9948 %l4-7: 0000030010495110 0000000000000000 000003000d3c99a8 0000000001292000 000002a1007878c0 zfs:spa_sync+208 (30004f1e040, 21a, 0, 7b31da40, 30004f1e190, 7b31dbe0) %l0-3: 0000030004f1e130 000003000d3c9978 000003000d3c9948 00000300247a85c0 %l4-7: 0000030004f1e208 0000030004f225c0 000003000d3c97c0 0000030004f1e1c0 000002a100787990 zfs:txg_sync_thread+224 (3000d3c97c0, bb0, 3000d3c98e8, 18d8400, 3000...
2006 Oct 10
3
Solaris 10 / ZFS file system major/minor number
Hi, In migrating from **VM to ZFS am I going to have an issue with Major/Minor numbers with NFS mounts? Take the following scenario. 1. NFS clients are connected to an active NFS server that has SAN shared storage between the active and standby nodes in a cluster. 2. The NFS clients are using the major/minor numbers on the active node in the cluster to communicate to the NFS active server. 3.
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
...dmu_objset_sync+13d () Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice] ffffff000840dad0 zfs:dsl_dataset_sync+5d () Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice] ffffff000840db40 zfs:dsl_pool_sync+b5 () Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice] ffffff000840dbd0 zfs:spa_sync+1c5 () Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice] ffffff000840dc60 zfs:txg_sync_thread+19a () Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice] ffffff000840dc70 unix:thread_start+8 () Sep 13 14:13:22 cypress unix: [ID 100000 kern.notice] Sep 13 14:13:22 cypress genunix: [ID 67...
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for zfs?
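ZFS of that era has no O_DIRECT-style bypass; the closest approximation is to stop caching file data in the ARC and to match the record size to the database page size. A sketch, assuming a dataset named tank/mysql and a build new enough to have the primarycache property:

    # cache only metadata (not file data) in the ARC for this dataset
    zfs set primarycache=metadata tank/mysql
    # match the dataset record size to the InnoDB page size
    zfs set recordsize=16k tank/mysql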
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
..., FTAG, &numbufs, &dbp), file: ../../common/fs/zfs/dmu.c, line: 796 ffffff00f8578880 genunix:assfail+7e () ffffff00f8578930 zfs:dmu_write+17f () ffffff00f85789e0 zfs:space_map_sync+295 () ffffff00f8578a70 zfs:metaslab_sync+2de () ffffff00f8578ad0 zfs:vdev_sync+d5 () ffffff00f8578b80 zfs:spa_sync+44b () ffffff00f8578c20 zfs:txg_sync_thread+247 () ffffff00f8578c30 unix:thread_start+8 () ----- END PANIC ----- I then ran zdb -e -bcsvL tank and got this output after letting it run all night: ----- SNIP ----- Traversing all blocks to verify checksums ... zdb_blkptr_cb: Got error 50 reading...
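Since the assertion fires while spa_sync is writing, a read-only import (which avoids rewriting the damaged space map) or a rewind import is the usual next attempt; both flags depend on the build, and the pool name tank is the one passed to zdb above:

    # import read-only so nothing is written back to the pool
    zpool import -o readonly=on tank
    # or preview (-n) and then perform a rewind to an earlier txg
    zpool import -nF tank
    zpool import -F tank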