Displaying 8 results from an estimated 8 matches for "metaslab_activ".
2007 Sep 19 (3 replies): ZFS panic when trying to import pool
...ill the same thing. Here is the panic backtrace:
Stack Backtrace
-----------------
vpanic()
assfail3+0xb9(fffffffff7dde5f0, 6, fffffffff7dde840, 0, fffffffff7dde820, 153)
space_map_load+0x2ef(ffffff008f1290b8, ffffffffc00fc5b0, 1, ffffff008f128d88, ffffff008dd58ab0)
metaslab_activate+0x66(ffffff008f128d80, 8000000000000000)
metaslab_group_alloc+0x24e(ffffff008f46bcc0, 400, 3fd0f1, 32dc18000, ffffff008fbeaa80, 0)
metaslab_alloc_dva+0x192(ffffff008f2d1a80, ffffff008f235730, 200, ffffff008fbeaa80, 0, 0)
metaslab_alloc+0x82(ffffff008f2d1a80, ffffff008f235730, 200, ffffff008fbeaa...
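The import panics on the same metaslab_activate -> space_map_load path that shows up in the other threads below. As a diagnostic sketch (not something posted in this thread), a DTrace one-liner in the spirit of the 2006 zil_replay entry further down can be left running while zpool import is retried, to show which stacks reach space_map_load before the assertion fires, assuming the fbt provider is available on that build:

  dtrace -n 'fbt:zfs:space_map_load:entry { stack(); }'

Printing each stack as it happens, rather than aggregating, means the output gathered up to the moment of the panic is still on the terminal.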
2008 May 26 (2 replies): indiana as nfs server: crash due to zfs
...[ID 100000 kern.notice]
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06830 genunix:assfail3+b9 ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d068e0 zfs:space_map_load+2c2 ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06920 zfs:metaslab_activate+66 ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d069e0 zfs:metaslab_group_alloc+24e ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06ab0 zfs:metaslab_alloc_dva+1da ()
May 22 02:18:57 ultra20 genunix: [ID 655072 kern.notice] ffffff0003d06b50 zfs:me...
2007 Oct 12 (0 replies): zfs: allocating allocated segment(offset=77984887808 size=66560)
...de0:
zfs: allocating allocated segment(offset=77984887808 size=66560)
87a6185c genunix:vcmn_err+16 (3, f4571654, 87a618)
87a61874 zfs:zfs_panic_recover+28 (f4571654, 2842f400,)
87a618e4 zfs:space_map_add+13f (8cbc1e78, 2842f400,)
87a6196c zfs:space_map_load+27a (8cbc1e78, 8613b5b0,)
87a6199c zfs:metaslab_activate+44 (8cbc1c40, 0, 800000)
87a619f4 zfs:metaslab_group_alloc+22a (8c8e4d80, 400, 0, 2)
87a61a80 zfs:metaslab_alloc_dva+170 (82a7b900, 86057bc0,)
87a61af0 zfs:metaslab_alloc+80 (82a7b900, 86057bc0,)
87a61b40 zfs:zio_dva_allocate+6b (88e56dc0)
87a61b58 zfs:zio_next_stage+aa (88e56dc0)
87a61b70 zfs:z...
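The top two frames show the message coming from zfs_panic_recover, called when space_map_add finds the segment it is adding already marked allocated. On builds of this era the zfs_recover tunable (set zfs:zfs_recover = 1 in /etc/system) downgrades zfs_panic_recover from a panic to a warning; with that set, a hedged one-liner such as the following, which only assumes the fbt provider can instrument the zfs module, prints the offending call path while the machine stays up:

  dtrace -n 'fbt:zfs:zfs_panic_recover:entry { stack(); }'

This is a triage aid only; continuing to write to a pool with overlapping space-map segments risks making the damage worse.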
2006 Jul 30 (6 replies): zfs mount stuck in zil_replay
...fbt::space_map_seg_compare:entry'{@[stack()]=count();}'
dtrace: description 'fbt::space_map_seg_compare:entry' matched 1 probe
^C
genunix`avl_find+0x38
zfs`space_map_add+0x12c
zfs`space_map_load+0x214
zfs`metaslab_activate+0x3c
zfs`metaslab_group_alloc+0x1b0
zfs`metaslab_alloc_dva+0x10c
zfs`metaslab_alloc+0x2c
zfs`zio_dva_allocate+0x50
zfs`zio_write_compress+0x1e4
zfs`arc_write+0xbc
zfs`dbuf_sync+0x6b0
z...
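Only the tail of the command survives the excerpt; the full invocation was presumably the usual dtrace -n form (the exact quoting is an assumption, since everything before fbt:: is cut off):

  dtrace -n fbt::space_map_seg_compare:entry'{@[stack()]=count();}'

The aggregation prints when the script is interrupted, which matches the ^C followed by the counted stack above: during the stuck mount, every path into the space map's AVL comparison routine runs through space_map_load and metaslab_activate.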
2007 Sep 18 (5 replies): ZFS panic in space_map.c line 125
...001 auth.error] reboot after
panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
space_map_load+0x17d()
metaslab_activate+0x6f()
metaslab_group_alloc+0x187()
metaslab_alloc_dva+0xab()
metaslab_alloc+0x51()
zio_dva_allocate+0x3f()
zio_next_stage+0x72()
zio_checksum_generate+0x5f()
zio_next_stage+0x72()
zio_write_compress+0x136()
zio_next_stage+0x72()
zio_wait_for_children+0x49()
zio_wait_children_ready+0x15()
zio_ne...
2008 Jun 26 (3 replies): [Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
...called with crypt = 0 on objset = 0
panic[cpu0]/thread=ffffff0004469c80: assertion failed: sm->sm_space == space (0x20000000 == 0x1ff8a000), file: ../../common/fs/zfs/space_map.c, line: 359
ffffff00044697f0 genunix:assfail3+b9 ()
ffffff00044698a0 zfs:space_map_load+3a6 ()
ffffff00044698f0 zfs:metaslab_activate+93 ()
ffffff00044699b0 zfs:metaslab_group_alloc+24e ()
ffffff0004469a70 zfs:metaslab_alloc_dva+200 ()
ffffff0004469b30 zfs:metaslab_alloc+156 ()
ffffff0004469b90 zfs:zio_dva_allocate+10d ()
ffffff0004469bd0 zfs:zio_execute+bb ()
ffffff0004469c60 genunix:taskq_thread+1cb ()
ffffff0004469c70 unix:...
2007 Oct 02 (53 replies): Direct I/O ability with zfs?
We are using MySQL and love the idea of using zfs for it. We are used to using Direct I/O to bypass file system caching (and let the DB do the caching). Does this exist for zfs?
2007 Dec 09 (8 replies): zpool kernel panics.
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280r (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have