Hi,

This system is a v240 running snv_34, bfu'd to daily.0307-nd.

On a t3b partner pair I was modifying the volslice setup. One volume had three slices, 0/1/2, with zfs using slice 2 (a test pool I'd forgotten about ...); the other had slices 3/4 (in use, but not by zfs). After unmounting the ufs filesystems on slices 3 and 4 I deleted slices 0/1/2 and created a new slice spanning all of them (i.e. all of the first volume). Slices 3/4 remained intact.

After the slicing and dicing I remounted the ufs bits on the second volume, then remembered the zfs test pool created a few weeks back, so I ran 'zpool status -v' to see if it was still about:

Mar 31 20:58:14 tb2 scsi: WARNING: /scsi_vhci/ssd@g60020f200000f14b418f72c80006d936 (ssd43): offline
WARNING: /scsi_vhci/ssd@g60020f200000f14b418f72c80006d936 (ssd43): i/o to invalid geometry

panic[cpu0]/thread=2a1008d7cc0: assertion failed: 0 == dmu_read(os, smo->smo_object, offset, size, entry_map), file: ../../common/fs/zfs/space_map.c, line: 296

000002a1008d6fc0 genunix:assfail+7c (7b6499b0, 7b6499f0, 128, 183e000, 11e4c00, 0)
  %l0-3: 0000060002245268 0000001400000000 000000007b649400 0000000000000001
  %l4-7: 000002a1008d7058 0000000000000000 0000000001870400 0000000000000000
000002a1008d7070 zfs:space_map_load+154 (600015150c0, 600020f43a8, 33b, 1380000000, 19d8, 60004014000)
  %l0-3: 00000000000019d8 0000000000000000 0000000000000001 000000007b61ce80
  %l4-7: 000000007b61cc28 00000000000019d8 00007fffffffffff 0000000000007fff
000002a1008d7130 zfs:metaslab_group_alloc+84 (60001446480, 6000155b058, 600, 2a1008d7298, 1c7d4, 60001514d80)
  %l0-3: 0000000000000000 0000000080000000 0000000000000001 0000000000000001
  %l4-7: 00000000000005ff 00000600020f43a8 0000000080000000 fffffffffffffe00
000002a1008d71e0 zfs:metaslab_alloc+4c (60001446480, 600, 60002099640, 1c7d4, 600, 600016b7a80)
  %l0-3: 00000600015952c8 000006000155b058 0000060002263600 000000000000003f
  %l4-7: 0000000000000000 000006000155b058 0000000000000000 000002a1008d7298
000002a1008d72a0 zfs:zio_dva_allocate+60 (6000f1720c0, 48, 703ca470, 703ca400, 9, 20001)
  %l0-3: 00000000703ca400 00000000018aa400 0000060002099640 0000000000000000
  %l4-7: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
000002a1008d7350 zfs:zio_write_compress+1e4 (6000f1720c0, 8fe20b, 8fe000, 1f001f, 3, 60002099640)
  %l0-3: 000000000000ffff 0000000000000002 0000000000000003 0000000000000600
  %l4-7: 0000000000000000 00000000001f0000 000000000000fc00 000000000000001f
000002a1008d7420 zfs:arc_write+b4 (6000f1720c0, 60001446480, 7, 3, 1c7d4, 60002099640)
  %l0-3: ffffffffffffffff 0000060002091488 0000060002079408 0000000000000004
  %l4-7: 0000000000000000 000002a1008d75f8 000006000207bc28 0000000000000004
000002a1008d7510 zfs:dbuf_sync+68c (60002091488, 6000e6fb2c0, b, 7, 1c7d4, 60000d4eb40)
  %l0-3: 000000007b605c00 000006000207bc28 0000000000000004 00000000703ca400
  %l4-7: 0000000000000000 0000000000000000 0000060002099640 0000000000000001
000002a1008d7620 zfs:dnode_sync+310 (60002245268, 0, 6000e6fb2c0, 6000daf71d0, 600022338b8, 1)
  %l0-3: 0000060002245268 0000060002245268 0000000000000000 0000000000000001
  %l4-7: 0000000000000000 0000000000000000 0000060002091488 0000060002245318
000002a1008d76e0 zfs:dmu_objset_sync_dnodes+68 (60000d4eb40, 60000d4ec20, 6000daf71d0, 60002245268, 0, 6000e6fb2c0)
  %l0-3: 0000060000d4eb40 00000600022338d0 00000600022338c8 000006000225bc00
  %l4-7: 00000000703d0418 00000000703d0000 00000000703d0000 0000000000000000
000002a1008d7790 zfs:dmu_objset_sync+50 (60000d4eb40, 6000daf71d0, 0, 6000207b908, 0, 1c7d4)
  %l0-3: 0000060000d4eb40 000000000001c7d4 0000000000000000 000000000001c7d4
  %l4-7: 00000300000b14e8 0000060000d4ec20 0000060000d4eb40 0000000000000000
000002a1008d78a0 zfs:dsl_pool_sync+108 (300000b1380, 1c7d4, 60000d4eb40, 60000d4eb40, 0, 0)
  %l0-3: 0000000000000320 fffffffffffffff8 0000060001446828 0000000000000000
  %l4-7: 000006000daf71d0 00000300000b14b8 00000300000b14e8 00000300000b1428
000002a1008d7950 zfs:spa_sync+dc (60001446480, 1c7d4, 0, 0, 60000d4eb68, 6000daf71d0)
  %l0-3: 0000000000000190 0000060001446578 00000300000b1380 0000000000000000
  %l4-7: 0000030000f20000 00000000018af898 0000000000000af0 00000600014465e8
000002a1008d7a00 zfs:txg_sync_thread+130 (300000b1380, 1c7d4, 2a1008d7ab0, 300000b14a0, 300000b1492, 300000b1490)
  %l0-3: 0000000000000000 00000300000b1450 00000300000b1458 00000300000b1496
  %l4-7: 00000300000b1494 00000300000b1448 000000000001c7d5 000000000001c7d4

Certainly whacking the storage from underneath zfs is not entirely fair, but the zfs filesystem was idle (it hadn't been touched for weeks), and since real disks presumably fail just as horribly, I'd have expected a more polite error than a panic. I don't know whether this is a new bug or one already addressed since daily.0307-nd. I have the dump if anybody wants to have a look. A small sketch of the assert-on-read pattern I mean is appended below my sig.

Cheers

Gavin
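For reference, here is a minimal, self-contained userland sketch of the pattern the panic message points at. It is my own illustration, not the actual space_map.c source; dmu_read_stub() and zfs_panic() are made-up stand-ins. The point is that the space map read's return value is asserted to be zero, so an I/O error from an offlined LUN takes the whole machine down instead of coming back to the caller as EIO, where it could at most fault the pool.

/*
 * Hypothetical sketch of the failure mode: a VERIFY-style check around the
 * space map read panics on any nonzero return (e.g. the LUN has gone away)
 * instead of propagating the error.
 */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* stand-in for dmu_read(); pretend the LUN went away mid-read */
static int
dmu_read_stub(unsigned long object, unsigned long offset,
    unsigned long size, void *buf)
{
	(void)object; (void)offset; (void)size; (void)buf;
	return (EIO);		/* device offline -> I/O error */
}

/* stand-in for the kernel's panic()/assfail() */
static void
zfs_panic(const char *msg)
{
	(void)fprintf(stderr, "panic: %s\n", msg);
	abort();
}

/* assert-style handling: any read error takes the whole machine down */
static void
space_map_load_assert(unsigned long object, unsigned long offset,
    unsigned long size, void *entry_map)
{
	if (dmu_read_stub(object, offset, size, entry_map) != 0)
		zfs_panic("assertion failed: 0 == dmu_read(...)");
}

/* error-propagating alternative: the caller could fault just the pool */
static int
space_map_load_return(unsigned long object, unsigned long offset,
    unsigned long size, void *entry_map)
{
	return (dmu_read_stub(object, offset, size, entry_map));
}

int
main(void)
{
	char buf[512];

	/* returns EIO: the pool could be marked faulted and life goes on */
	if (space_map_load_return(1, 0, sizeof (buf), buf) != 0)
		(void)fprintf(stderr, "space map read failed, faulting pool\n");

	/* panics the "kernel", which is what the reslicing triggered here */
	space_map_load_assert(1, 0, sizeof (buf), buf);
	return (0);
}

(In the real kernel the check shows up as genunix:assfail in the stack above rather than a userland abort(), but the effect is the same: there is no path for the error to come back up, so the box panics.)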