Hi,

My file server is crashing about every 15 minutes at the moment.
The panic looks like:

Jun  8 11:48:43 zfs kernel: panic: Solaris(panic): zfs: allocating
allocated segment(offset=12922221670400 size=24576)
Jun  8 11:48:43 zfs kernel:
Jun  8 11:48:43 zfs kernel: cpuid = 1
Jun  8 11:48:43 zfs kernel: KDB: stack backtrace:
Jun  8 11:48:43 zfs kernel: #0 0xffffffff80aada57 at kdb_backtrace+0x67
Jun  8 11:48:43 zfs kernel: #1 0xffffffff80a6bb36 at vpanic+0x186
Jun  8 11:48:43 zfs kernel: #2 0xffffffff80a6b9a3 at panic+0x43
Jun  8 11:48:43 zfs kernel: #3 0xffffffff82488192 at vcmn_err+0xc2
Jun  8 11:48:43 zfs kernel: #4 0xffffffff821f73ba at zfs_panic_recover+0x5a
Jun  8 11:48:43 zfs kernel: #5 0xffffffff821dff8f at range_tree_add+0x20f
Jun  8 11:48:43 zfs kernel: #6 0xffffffff821deb06 at metaslab_free_dva+0x276
Jun  8 11:48:43 zfs kernel: #7 0xffffffff821debc1 at metaslab_free+0x91
Jun  8 11:48:43 zfs kernel: #8 0xffffffff8222296a at zio_dva_free+0x1a
Jun  8 11:48:43 zfs kernel: #9 0xffffffff8221f6cc at zio_execute+0xac
Jun  8 11:48:43 zfs kernel: #10 0xffffffff80abe827 at taskqueue_run_locked+0x127
Jun  8 11:48:43 zfs kernel: #11 0xffffffff80abf9c8 at taskqueue_thread_loop+0xc8
Jun  8 11:48:43 zfs kernel: #12 0xffffffff80a2f7d5 at fork_exit+0x85
Jun  8 11:48:43 zfs kernel: #13 0xffffffff80ec4abe at fork_trampoline+0xe
Jun  8 11:48:43 zfs kernel: Uptime: 9m7s

Maybe a known bug?
Is there anything I can do about this?
Any debugging needed?

System is running FreeBSD 11.1-RELEASE-p10

Thanx,
--WjW
On 08/06/2018 13:02, Willem Jan Withagen wrote:
> My file server is crashing about every 15 minutes at the moment.
> The panic looks like:
>
> Jun  8 11:48:43 zfs kernel: panic: Solaris(panic): zfs: allocating
> allocated segment(offset=12922221670400 size=24576)
[...]
> Maybe a known bug?
> Is there anything I can do about this?
> Any debugging needed?

Sorry to inform you, but your on-disk data got corrupted.
The most straightforward thing you can do is try to save data
from the pool in readonly mode.

--
Andriy Gapon
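A read-only rescue along those lines might look something like the sketch below. This is only an illustration: the pool name "tank" and the destination "/backup" are placeholders, and after repeated panics the import may need -f.

```shell
# Make sure the pool is not imported read-write (may need -f after a crash).
zpool export tank

# Re-import with readonly=on so ZFS issues no frees or writes against the
# damaged space maps; the panic fires in the free path (zio_dva_free ->
# metaslab_free -> range_tree_add), so avoiding writes should avoid it.
zpool import -o readonly=on tank

# Copy everything off to known-good storage, then rebuild the pool.
rsync -a /tank/ /backup/tank/
```

If the import itself still trips the assertion, the vfs.zfs.recover sysctl exists on FreeBSD to turn zfs_panic_recover() panics into warnings, but that is a last resort for evacuating data, not a fix.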