Displaying 8 results from an estimated 8 matches for "space_map_load".
2007 Sep 19
3
ZFS panic when trying to import pool
...;pool> it panic.
I also dd'd the disk and tested it on another server running OpenSolaris B72, with the same result. Here is the panic backtrace:
Stack Backtrace
-----------------
vpanic()
assfail3+0xb9(fffffffff7dde5f0, 6, fffffffff7dde840, 0, fffffffff7dde820, 153)
space_map_load+0x2ef(ffffff008f1290b8, ffffffffc00fc5b0, 1, ffffff008f128d88,
ffffff008dd58ab0)
metaslab_activate+0x66(ffffff008f128d80, 8000000000000000)
metaslab_group_alloc+0x24e(ffffff008f46bcc0, 400, 3fd0f1, 32dc18000,
ffffff008fbeaa80, 0)
metaslab_alloc_dva+0x192(ffffff008f2d1a80, ffffff008f235730, 200,
fff...
2008 May 26
2
indiana as nfs server: crash due to zfs
...(0x40000000 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 315
May 22 02:18:57 ultra20 unix: [ID 100000 kern.notice]
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06830 genunix:assfail3+b9 ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d068e0 zfs:space_map_load+2c2 ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06920 zfs:metaslab_activate+66 ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d069e0 zfs:metaslab_group_alloc+24e ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06ab0 zfs:metasl...
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
...-rf -
Oct 11 18:10:06 nas panic[cpu0]/thread=87a61de0:
zfs: allocating allocated segment(offset=77984887808 size=66560)
87a6185c genunix:vcmn_err+16 (3, f4571654, 87a618)
87a61874 zfs:zfs_panic_recover+28 (f4571654, 2842f400,)
87a618e4 zfs:space_map_add+13f (8cbc1e78, 2842f400,)
87a6196c zfs:space_map_load+27a (8cbc1e78, 8613b5b0,)
87a6199c zfs:metaslab_activate+44 (8cbc1c40, 0, 800000)
87a619f4 zfs:metaslab_group_alloc+22a (8c8e4d80, 400, 0, 2)
87a61a80 zfs:metaslab_alloc_dva+170 (82a7b900, 86057bc0,)
87a61af0 zfs:metaslab_alloc+80 (82a7b900, 86057bc0,)
87a61b40 zfs:zio_dva_allocate+6b (88e56dc0)
87...
2006 Jul 30
6
zfs mount stuck in zil_replay
...7059674296
#
bash-3.00# dtrace -n 'fbt::space_map_seg_compare:entry{@[stack()]=count();}'
dtrace: description 'fbt::space_map_seg_compare:entry' matched 1 probe
^C
genunix`avl_find+0x38
zfs`space_map_add+0x12c
zfs`space_map_load+0x214
zfs`metaslab_activate+0x3c
zfs`metaslab_group_alloc+0x1b0
zfs`metaslab_alloc_dva+0x10c
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x50
zfs`zio_write_compress+0x1e4
zfs`arc_write+0xbc...
2007 Sep 18
5
ZFS panic in space_map.c line 125
...000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
space_map_load+0x17d()
metaslab_activate+0x6f()
metaslab_group_alloc+0x187()
metaslab_alloc_dva+0xab()
metaslab_alloc+0x51()
zio_dva_allocate+0x3f()
zio_next_stage+0x72()
zio_checksum_generate+0x5f()
zio_next_stage+0x72()
zio_write_compress+0x136()
zio_next_stage+0x72()
zio_wait_for_children+0x49()
zio_wait_child...
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
...35 console login: NOTICE: zio_read_decrypt: called with crypt = 0 on
objset = 0
panic[cpu0]/thread=ffffff0004469c80: assertion failed: sm->sm_space == space
(0x20000000 == 0x1ff8a000), file: ../../common/fs/zfs/space_map.c, line: 359
ffffff00044697f0 genunix:assfail3+b9 ()
ffffff00044698a0 zfs:space_map_load+3a6 ()
ffffff00044698f0 zfs:metaslab_activate+93 ()
ffffff00044699b0 zfs:metaslab_group_alloc+24e ()
ffffff0004469a70 zfs:metaslab_alloc_dva+200 ()
ffffff0004469b30 zfs:metaslab_alloc+156 ()
ffffff0004469b90 zfs:zio_dva_allocate+10d ()
ffffff0004469bd0 zfs:zio_execute+bb ()
ffffff0004469c60 genunix...
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (letting the DB handle caching itself). Does this exist for zfs?
This message posted from opensolaris.org
2007 Dec 09
8
zpool kernel panics.
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280r (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have