search for: space_map_remov

Displaying 8 results from an estimated 8 matches for "space_map_remov".

2007 Feb 11
0
unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
..../../common/fs/zfs/space_map.c, line: 125
000002a100b74c40 genunix:assfail+74 (7b252450, 7b252460, 7d, 183d400, 11eb000, 0)
  %l0-3: 0000000000000000 0000000000000000 00000000011e5368 000003000b6d2528
  %l4-7: 00000000011eb000 0000000000000000 000000000186f800 0000000000000000
000002a100b74cf0 zfs:space_map_remove+b8 (60001db9eb8, 17698c0000, 20000, 7b252400, 7b252400, 7b252400)
  %l0-3: 0000000000000000 00000017698e0000 00000017623a0000 000003000b6d4fd8
  %l4-7: 000003000b6d5050 0000001762360000 000000007b252000 00000017623e0000
...
Noticing the LUN was nearly full, I added a second 100 GB LUN to the pool. Mu...
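For context, line 125 of the era's space_map.c is the ss != NULL assertion in space_map_remove(), which fires when the segment being carved out of the free-space AVL tree cannot be found. A simplified sketch of that check (paraphrased from the OpenSolaris sources of the time; exact line numbers and details vary between builds):

    void
    space_map_remove(space_map_t *sm, uint64_t start, uint64_t size)
    {
            space_seg_t ssearch, *ss;
            avl_index_t where;
            uint64_t end = start + size;

            /* Find the free segment containing [start, end). */
            ssearch.ss_start = start;
            ssearch.ss_end = end;
            ss = avl_find(&sm->sm_root, &ssearch, &where);

            /* The range must lie entirely inside one existing segment. */
            VERIFY(ss != NULL);                     /* the line-125 panic */
            VERIFY3U(ss->ss_start, <=, start);
            VERIFY3U(ss->ss_end, >=, end);
            /* ... split or shrink the segment ... */
    }

In other words, the on-disk space map contradicts the in-memory tree, so the panic is reporting pre-existing corruption rather than causing it.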
2007 Nov 14
0
space_map.c 'ss == NULL' panic strikes back.
...here. I've two ideas:

1. Because it happened on a system crash or something similar, we can expect that it was caused by the last change. If so, we could try corrupting the most recent uberblock, so ZFS will pick up the previous uberblock.

2. Instead of panicking in space_map_add(), we could try to space_map_remove() the offending entry, e.g.:

-	VERIFY(ss == NULL);
+	if (ss != NULL) {
+		space_map_remove(sm, ss->ss_start, ss->ss_end - ss->ss_start);
+		goto again;
+	}

Both of those ideas can make things worse, so I want to know what damage can be done using those methods, or even better, what else (safer) we can try?...
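The VERIFY in question is the overlap check in space_map_add(): a segment being added to the free map must not already overlap an existing free segment. Roughly how the suggested workaround would sit in that function (a paraphrased sketch of the era's space_map.c, not the verbatim source; note that space_map_remove() takes a (start, size) pair, not (start, end)):

    again:
            ssearch.ss_start = start;
            ssearch.ss_end = end;
            ss = avl_find(&sm->sm_root, &ssearch, &where);

            /* Make sure we don't overlap with either of our neighbors. */
            if (ss != NULL) {
                    /*
                     * Apparent double free: discard the conflicting
                     * entry and retry, instead of letting
                     * VERIFY(ss == NULL) panic the box.
                     */
                    space_map_remove(sm, ss->ss_start,
                        ss->ss_end - ss->ss_start);
                    goto again;
            }

As the poster says, this silently drops one of two conflicting records, so it may trade an immediate panic for quiet corruption; it is a triage tool, not a fix.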
2007 Mar 21
4
HELP!! I can't mount my zpool!!
...<= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142 unix: [ID 100000 kern.notice]
Mar 21 11:09:17 SERVER142 genunix: [ID 802836 kern.notice] fffffe800047e320 fffffffffb9ad0b9 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e3a0 zfs:space_map_remove+1a3 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e3d0 zfs:space_map_claim+32 ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e410 zfs:zfsctl_ops_root+2f9db09d ()
Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047e450 zfs:z...
2007 Sep 18
5
ZFS panic in space_map.c line 125
...rror: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125

The server saved a core file, and the resulting backtrace is listed below:

$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
space_map_load+0x17d()
metaslab_activate+0x6f()
metaslab_group_alloc+0x187()
metaslab_alloc_dva+0xab()
metaslab_alloc+0x51()
zio_dva_allocate+0x3f()
zio_next_stage+0x72()
zio_checksum_generate+0x5f()
zio_next_stage+0x72()
zio_write_compress+0x136()
zio_next_stage+0x72()
zio_wait_for_childr...
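The space_map_load() frame is the common thread in several of these reports: the panic fires while a metaslab's on-disk space-map log is being replayed so the metaslab can be activated for allocation. The replay loop is essentially (paraphrased; the SM_*_DECODE macros are the entry decoders from the era's sys/space_map.h):

    /* Replay one space-map log entry (simplified). */
    offset = (SM_OFFSET_DECODE(e) << sm->sm_shift) + sm->sm_start;
    size = SM_RUN_DECODE(e) << sm->sm_shift;

    if (SM_TYPE_DECODE(e) == maptype)
            space_map_add(sm, offset, size);
    else
            space_map_remove(sm, offset, size);

An ALLOC record for space the tree does not currently hold as free lands in space_map_remove() with ss == NULL, which is exactly the line-125 assertion above.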
2006 Jul 30
6
zfs mount stuck in zil_replay
...x68
zfs`dmu_objset_sync+0x50
zfs`dsl_dataset_sync+0xc
zfs`dsl_pool_sync+0x60
zfs`spa_sync+0xe0
zfs`txg_sync_thread+0x130
unix`thread_start+0x4
316769

genunix`avl_find+0x38
zfs`space_map_remove+0x98
zfs`space_map_load+0x214
zfs`metaslab_activate+0x3c
zfs`metaslab_group_alloc+0x1b0
zfs`metaslab_alloc_dva+0x10c
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x50
zfs`zio_write_compress+0x1e4...
2007 Nov 25
2
Corrupted pool
...Solaris 10 servers, and the box panicked this evening with the following stack trace:

Nov 24 04:03:35 foo unix: [ID 100000 kern.notice]
Nov 24 04:03:35 foo genunix: [ID 802836 kern.notice] fffffe80004a14d0 fffffffffb9b49f3 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1550 zfs:space_map_remove+239 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1580 zfs:space_map_claim+32 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a15c0 zfs:zfsctl_ops_root+2f95204d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1600 zfs:zfsctl_ops_root+2f9...
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for zfs?
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris 10 280R (SPARC) server. The message I get on panic is this:

panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024)

This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have
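"freeing free segment" is the other side of the same invariant: it is raised via zfs_panic_recover() when a block being freed is already covered by an entry in the metaslab's free map, i.e. an apparent double free. Conceptually (an illustrative sketch, not the verbatim source):

    /* Does [offset, offset + size) overlap an existing free segment? */
    ssearch.ss_start = offset;
    ssearch.ss_end = offset + size;
    ss = avl_find(&sm->sm_root, &ssearch, &where);
    if (ss != NULL)
            zfs_panic_recover("zfs: freeing free segment "
                "(offset=%llu size=%llu)",
                (u_longlong_t)offset, (u_longlong_t)size);

On builds that expose the zfs_recover tunable, zfs_panic_recover() logs instead of panicking, which is sometimes enough to import such a pool and evacuate the data.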