Hi All,

Last week we had a panic caused by ZFS and then we had a corrupted zpool!
Today we are doing some tests with the same data, but on a different server/storage array. While copying the data ... panic! And again we had a corrupted zpool!!

Mar 28 12:38:19 SERVER144 genunix: [ID 403854 kern.notice] assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125
Mar 28 12:38:19 SERVER144 unix: [ID 100000 kern.notice]
Mar 28 12:38:19 SERVER144 genunix: [ID 802836 kern.notice] fffffe80002db620 fffffffffb9acff3 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db6a0 zfs:space_map_remove+239 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db710 zfs:space_map_load+17d ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db740 zfs:zfsctl_ops_root+2fb80397 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db7c0 zfs:metaslab_group_alloc+186 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db850 zfs:metaslab_alloc_dva+ab ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db8a0 zfs:zfsctl_ops_root+2fb81189 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db8c0 zfs:zio_dva_allocate+3f ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db8d0 zfs:zio_next_stage+72 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db8f0 zfs:zio_checksum_generate+5f ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db900 zfs:zio_next_stage+72 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db950 zfs:zio_write_compress+136 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db960 zfs:zio_next_stage+72 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db990 zfs:zio_wait_for_children+49 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db9a0 zfs:zio_wait_children_ready+15 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db9b0 zfs:zfsctl_ops_root+2fb9a1e6 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db9e0 zfs:zio_wait+2d ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dba70 zfs:arc_write+cc ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dbb00 zfs:dmu_objset_sync+141 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dbb20 zfs:dsl_dataset_sync+23 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dbb70 zfs:dsl_pool_sync+6b ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dbbd0 zfs:spa_sync+fa ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dbc60 zfs:txg_sync_thread+115 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002dbc70 unix:thread_start+8 ()
Mar 28 12:38:19 SERVER144 unix: [ID 100000 kern.notice]
Mar 28 12:38:19 SERVER144 genunix: [ID 672855 kern.notice] syncing file systems...
Mar 28 12:38:19 SERVER144 genunix: [ID 733762 kern.notice] 1
Mar 28 12:38:20 SERVER144 genunix: [ID 904073 kern.notice] done
Mar 28 12:38:21 SERVER144 genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c1t0d0s4, offset 1677983744, content: kernel
Mar 28 12:38:26 SERVER144 genunix: [ID 409368 kern.notice] 100% done: 129179 pages dumped, compression ratio 5.16,
Mar 28 12:38:26 SERVER144 genunix: [ID 851671 kern.notice] dump succeeded

Suggestions? We have about 2TB free on that zpool and were copying about 70GB.

tnx,
Gino

This message posted from opensolaris.org
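For anyone triaging a similar report, the ZFS stack frames can be pulled out of a flattened /var/adm/messages entry with a one-liner. In this sketch the log text is embedded as a two-line sample taken from the panic above; on a live system you would read /var/adm/messages instead.

```shell
# Sample panic lines embedded for illustration; on a real host, replace
# the variable with the relevant lines from /var/adm/messages.
panic_log='Mar 28 12:38:19 SERVER144 genunix: [ID 403854 kern.notice] assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db6a0 zfs:space_map_remove+239 ()
Mar 28 12:38:19 SERVER144 genunix: [ID 655072 kern.notice] fffffe80002db710 zfs:space_map_load+17d ()'

# Extract just the module:function+offset frames from the noise.
printf '%s\n' "$panic_log" | grep -o 'zfs:[A-Za-z_]*+[0-9a-f]*'
```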
I forgot to mention we are using S10U2.

Gino
Hi Gino,

this looks like an instance of bug 6458218 (see
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218).

The fix for this bug is integrated into snv_60.

Kind regards,
Victor

Gino Ruopolo wrote:
> Hi All,
>
> Last week we had a panic caused by ZFS and then we had a corrupted zpool!
> Today we are doing some tests with the same data, but on a different server/storage array. While copying the data ... panic!
> And again we had a corrupted zpool!!
>
> [...]
>
> Suggestion?
> We have about 2TB free on that zpool and were copying about 70GB.
>
> tnx,
> Gino

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
zfs-discuss-bounces at opensolaris.org wrote on 03/28/2007 06:34:12 AM:

> Hi Gino,
>
> this looks like an instance of bug 6458218 (see
> http://bugs.opensolaris.org/view_bug.do?bug_id=6458218)
>
> The fix for this bug is integrated into snv_60.
>
> Kind regards,
> Victor

I know I may be somewhat of an outsider here, but we use full Solaris releases + patches. Is there any way to note in the bug reports that a fix has made it back to a Solaris release or patch, the same way the Nevada build integration is noted? Or is there another way to tell where bits are pushed for "real" (tm) Solaris releases? I seem to spend a lot of time trying to find specific ZFS-related patches or status when it comes to supported Solaris releases...

-Wade
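On the supported-Solaris side, one way to check whether a given fix has landed on a box is to grep the installed-patch list with showrev. This is only a sketch: the patch ID below is a placeholder, not the real patch for bug 6458218 (the bug report or support channel is where you would find the actual ID).

```shell
# Placeholder patch ID -- substitute the real one from the bug report.
PATCH=123456-01

# showrev -p lists every patch installed on a Solaris host; if the command
# is unavailable or the patch is missing, say so instead of failing silently.
if showrev -p 2>/dev/null | grep -qw "$PATCH"; then
    echo "patch $PATCH installed"
else
    echo "patch $PATCH not installed"
fi
```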
I was thinking of something similar. When we go to download the various bits (iso-a.zip through iso-e.zip and the md5sums), it seems like there should also be release notes on the list of files being downloaded. Similar to the Java release notes, I would expect them to point out which bugs were fixed, major changes to how the main tools work, etc. Although I guess this isn't really the right list for that...

Malachi

On 3/28/07, Wade.Stuart at fallon.com <Wade.Stuart at fallon.com> wrote:
> I know I may be somewhat of an outsider here, but we use full Solaris
> releases + patches. Is there any way to ref that a fix has made it back to
> Solaris release or patch at the same level as the Nevada bit push note in
> the bug reports?
> [...]
> -Wade
Gino Ruopolo wrote:
> Hi All,
>
> Last week we had a panic caused by ZFS and then we had a corrupted
> zpool! Today we are doing some tests with the same data, but on a
> different server/storage array. While copying the data ... panic!
> And again we had a corrupted zpool!!

This is bug 6458218, which was fixed in snv_60 and will be fixed in s10u4.

To recover from this situation, try running build 60 or later, and put 'set zfs:zfs_recover=1' in /etc/system. This should allow you to read your pool again. (However, we can't recommend running in this state forever; you should back up and restore your pool ASAP.)

We're very sorry that you've encountered this bug. Unfortunately, it was very difficult to track down, so it existed for quite some time. Thankfully, it is now fixed so you don't need to hit it anymore.

--matt
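Matt's procedure, spelled out as a sketch. The pool name "tank" and the backup path are placeholders, and $SYSTEM_FILE defaults to a scratch file here so the snippet is safe to dry-run; point it at /etc/system on the real machine.

```shell
# 1. On a snv_60-or-later system, enable the recovery tunable in /etc/system
#    (guarded so repeated runs don't append duplicate lines).
SYSTEM_FILE=${SYSTEM_FILE:-/tmp/system.scratch}   # use /etc/system for real
grep -q '^set zfs:zfs_recover=1' "$SYSTEM_FILE" 2>/dev/null || \
    echo 'set zfs:zfs_recover=1' >> "$SYSTEM_FILE"

# 2. Reboot so the tunable takes effect, then import the pool and get the
#    data off it. Commented out here because these act on real hardware:
# reboot
# zpool import -f tank
# zfs snapshot tank@rescue && zfs send tank@rescue > /backup/tank.zfs
```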
> Gino Ruopolo wrote:
> > Last week we had a panic caused by ZFS and then we had a corrupted
> > zpool! Today we are doing some tests with the same data, but on a
> > different server/storage array. While copying the data ... panic!
> > And again we had a corrupted zpool!!
>
> This is bug 6458218, which was fixed in snv_60 and will be fixed in s10u4.

Thank you Matt.

Please consider advising all the people using ZFS to upgrade to snv_60. This bug can be very dangerous.

gino
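Before deciding whether the upgrade applies, it helps to check what a host is actually running. A small sketch; the outputs shown in the comments are illustrative examples, not guaranteed strings.

```shell
# Kernel build string; a Nevada host reports something like "snv_60" here.
uname -v

# Full release banner on Solaris (e.g. "Solaris 10 6/06 ..."); the file is
# absent on other systems, hence the guard.
[ -r /etc/release ] && head -1 /etc/release || true
```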
Any chance these fixes will make it into the normal Solaris R&S patches?
If the fix is put into Solaris 10 update 4 (as Matt expects), it should trickle into the R&S patch cluster as well.
Hi Matt,

trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in /etc/system gives us:

Apr 3 20:35:56 SERVER141 panic[cpu3]/thread=fffffffec3860f20:
Apr 3 20:35:56 SERVER141 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 158
Apr 3 20:35:56 SERVER141 unix: [ID 100000 kern.notice]
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae0d0 genunix:assfail3+b9 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae160 zfs:space_map_remove+1da ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae1b0 zfs:space_map_claim+44 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae210 zfs:metaslab_claim_dva+11c ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae270 zfs:metaslab_claim+91 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae2a0 zfs:zio_dva_claim+25 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae2c0 zfs:zio_next_stage+b3 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae2e0 zfs:zio_gang_pipeline+31 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae300 zfs:zio_next_stage+b3 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae350 zfs:zio_wait_for_children+5d ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae370 zfs:zio_wait_children_ready+20 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae390 zfs:zio_next_stage_async+bb ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae3d0 zfs:zio_wait+2e ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae420 zfs:zil_claim_log_block+61 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae550 zfs:zil_parse+175 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae5a0 zfs:zil_claim+88 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae770 zfs:dmu_objset_find+27f ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172ae940 zfs:dmu_objset_find+ed ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aeb10 zfs:dmu_objset_find+ed ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aebb0 zfs:spa_load+8b4 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aec20 zfs:spa_import+94 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aec60 zfs:zfs_ioc_pool_import+73 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aece0 zfs:zfsdev_ioctl+119 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aed20 genunix:cdev_ioctl+48 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aed60 specfs:spec_ioctl+86 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aedc0 genunix:fop_ioctl+37 ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aeec0 genunix:ioctl+16b ()
Apr 3 20:35:56 SERVER141 genunix: [ID 655072 kern.notice] ffffff00172aef10 unix:brand_sys_syscall32+1a3 ()
Apr 3 20:35:56 SERVER141 unix: [ID 100000 kern.notice]
Apr 3 20:35:56 SERVER141 genunix: [ID 672855 kern.notice] syncing file systems...
Apr 3 20:35:56 SERVER141 genunix: [ID 733762 kern.notice] 2
Gino,

I just had a similar experience and was able to import the pool when I added the readonly option (zpool import -f -o ro ....)

Ernie

Gino Ruopolo wrote:
> Hi Matt,
>
> trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in /etc/system gives us:
>
> Apr 3 20:35:56 SERVER141 panic[cpu3]/thread=fffffffec3860f20:
> Apr 3 20:35:56 SERVER141 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 158
> [...]
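Ernie's suggestion, written out as a dry-run sketch. The pool name "tank" is a placeholder; the command is only echoed here, since running it acts on real devices.

```shell
POOL=tank   # placeholder pool name -- substitute your own

# -f forces import of a pool last in use on another system;
# -o ro is the readonly option Ernie used, so nothing is written to the pool.
# Echoed rather than executed in this sketch:
echo "zpool import -f -o ro $POOL"
```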
> Gino,
> I just had a similar experience and was able to import the pool when I
> added the readonly option (zpool import -f -o ro ....)

No way ... we still get a panic :(

gino
Is there anyone interested in a kernel dump? We are still unable to import the corrupted zpool, even in readonly mode...

Apr 5 22:27:34 SERVER142 panic[cpu2]/thread=fffffffec9eef0e0:
Apr 5 22:27:34 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 158
Apr 5 22:27:34 SERVER142 unix: [ID 100000 kern.notice]
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a0d0 genunix:assfail3+b9 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a160 zfs:space_map_remove+1da ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a1b0 zfs:space_map_claim+44 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a210 zfs:metaslab_claim_dva+11c ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a270 zfs:metaslab_claim+91 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a2a0 zfs:zio_dva_claim+25 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a2c0 zfs:zio_next_stage+b3 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a2e0 zfs:zio_gang_pipeline+31 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a300 zfs:zio_next_stage+b3 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a350 zfs:zio_wait_for_children+5d ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a370 zfs:zio_wait_children_ready+20 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a390 zfs:zio_next_stage_async+bb ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a3d0 zfs:zio_wait+2e ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a420 zfs:zil_claim_log_block+61 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a550 zfs:zil_parse+175 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a5a0 zfs:zil_claim+88 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a770 zfs:dmu_objset_find+27f ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3a940 zfs:dmu_objset_find+ed ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3ab10 zfs:dmu_objset_find+ed ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3abb0 zfs:spa_load+8b4 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3ac20 zfs:spa_import+94 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3ac60 zfs:zfs_ioc_pool_import+73 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3ace0 zfs:zfsdev_ioctl+119 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3ad20 genunix:cdev_ioctl+48 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3ad60 specfs:spec_ioctl+86 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3adc0 genunix:fop_ioctl+37 ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3aec0 genunix:ioctl+16b ()
Apr 5 22:27:34 SERVER142 genunix: [ID 655072 kern.notice] ffffff0017c3af10 unix:brand_sys_syscall32+1a3 ()
Apr 5 22:27:34 SERVER142 unix: [ID 100000 kern.notice]
Apr 5 22:27:34 SERVER142 genunix: [ID 672855 kern.notice] syncing file systems...
Apr 5 22:27:34 SERVER142 genunix: [ID 733762 kern.notice] 45
Apr 5 22:27:35 SERVER142 genunix: [ID 733762 kern.notice] 1
Apr 5 22:27:36 SERVER142 genunix: [ID 904073 kern.notice] done