G'day. I've got an OpenSolaris snv_95 server that I use for serving media. It uses a DQ35JOE motherboard with a dual-core CPU; my rpool is mirrored on two 40GB IDE drives, and my media is mirrored on two 500GB SATA drives. I've got a few CIFS shares on the media pool, and I'm using MediaTomb to stream to my PS3. No problems at all, until today.

I was at work (obviously not working too hard :) ) when I thought that I really should scrub my pools, since I hadn't done it for a while. So I SSHed into the box and started a scrub on both pools. A few minutes later, I lost my SSH connection... uh oh. I wasn't too worried, though; I figured the ADSL must have gone down or something. I came home, and the server was in a reboot loop, kernel panicking. Nuts...

I booted into the snv_95 LiveDVD with no problem and set about scrubbing my rpool; everything was good until I decided to import and start scrubbing my storage pool... kernel panic... Nuts... I removed the storage pool drives from the machine, and it booted fine and started scrubbing the rpool again. No problems. I then decided to move the storage drives over to my desktop machine and tried to import... kernel panic...

So, the trick is, how do I fix it? I've read a few posts, and I've seen other people with similar problems, but I have to admit I'm simply not smart enough to solve the problem, so, anyone got any ideas? Here's some info that I hope proves useful.

aldredmr@asmodeus:~/Desktop$ pfexec zpool import
  pool: storage
    id: 6933883927787501942
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:

        storage     ONLINE
          mirror    ONLINE
            c3t3d0  ONLINE
            c3t2d0  ONLINE

aldredmr@asmodeus:~/Desktop$ zdb -uuu -e storage
Uberblock

        magic = 0000000000bab10c
        version = 10
        txg = 3818020
        guid_sum = 6700303293925244073
        timestamp = 1220003402 UTC = Fri Aug 29 17:50:02 2008
        rootbp = [L0 DMU objset] 400L/200P DVA[0]=<0:6a00058e00:200> DVA[1]=<0:20000a8600:200> DVA[2]=<0:3800050600:200> fletcher4 lzjb LE contiguous birth=3818020 fill=170 cksum=8b56cdef9:38379d3cd95:b809c1c9bb15:197649b024bfd1

aldredmr@asmodeus:~/Desktop$ zdb -e -bb storage
Traversing all blocks to verify nothing leaked ...

        No leaks (block sum matches space maps exactly)

        bp count:         3736040
        bp logical:    484538716672    avg: 129693
        bp physical:   484064542720    avg: 129566    compression: 1.00
        bp allocated:  484259193344    avg: 129618    compression: 1.00
        SPA allocated: 484259193344    used: 97.20%

Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
   105  1.11M    339K   1017K    9.7K    3.35     0.00  deferred free
     2    32K      4K   12.0K   6.00K    8.00     0.00  object directory
     2     1K      1K   3.00K   1.50K    1.00     0.00  object array
     1    16K   1.50K   4.50K   4.50K   10.67     0.00  packed nvlist
     -      -       -       -       -       -        -  packed nvlist size
     1    16K   3.00K   9.00K   9.00K    5.33     0.00  bplist
     -      -       -       -       -       -        -  bplist header
     -      -       -       -       -       -        -  SPA space map header
   373  2.14M    801K   2.35M   6.44K    2.73     0.00  SPA space map
     3  40.0K   40.0K   40.0K   13.3K    1.00     0.00  ZIL intent log
   552  8.62M   2.40M   4.82M   8.94K    3.60     0.00  DMU dnode
     8     8K      4K   8.50K   1.06K    2.00     0.00  DMU objset
     -      -       -       -       -       -        -  DSL directory
     8     4K      4K   12.0K   1.50K    1.00     0.00  DSL directory child map
     7  3.50K   3.50K   10.5K   1.50K    1.00     0.00  DSL dataset snap map
    15   225K   25.0K   75.0K   5.00K    8.98     0.00  DSL props
     -      -       -       -       -       -        -  DSL dataset
     -      -       -       -       -       -        -  ZFS znode
     -      -       -       -       -       -        -  ZFS V0 ACL
 3.56M   451G    451G    451G    127K    1.00   100.00  ZFS plain file
 1.55K   9.9M   1.51M   3.03M   1.95K    6.55     0.00  ZFS directory
     7  3.50K   3.50K   7.00K      1K    1.00     0.00  ZFS master node
    40   550K   87.0K    174K   4.35K    6.32     0.00  ZFS delete queue
     -      -       -       -       -       -        -  zvol object
     -      -       -       -       -       -        -  zvol prop
     -      -       -       -       -       -        -  other uint8[]
     -      -       -       -       -       -        -  other uint64[]
     1    512     512   1.50K   1.50K    1.00     0.00  other ZAP
     -      -       -       -       -       -        -  persistent error log
     1   128K   10.0K   30.0K   30.0K   12.80     0.00  SPA history
     -      -       -       -       -       -        -  SPA history offsets
     -      -       -       -       -       -        -  Pool properties
     -      -       -       -       -       -        -  DSL permissions
   107  53.5K   53.5K    107K      1K    1.00     0.00  ZFS ACL
     -      -       -       -       -       -        -  ZFS SYSACL
     4    64K      4K      8K      2K   16.00     0.00  FUID table
     -      -       -       -       -       -        -  FUID table size
     -      -       -       -       -       -        -  DSL dataset next clones
     -      -       -       -       -       -        -  scrub work queue
 3.56M   451G    451G    451G    127K    1.00   100.00  Total

I've checked my /var/adm/messages file, and found the following:

Aug 29 17:37:29 asmodeus unix: [ID 836849 kern.notice]
Aug 29 17:37:29 asmodeus ^Mpanic[cpu2]/thread=ffffff00087f0c80:
Aug 29 17:37:29 asmodeus genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ffffff00087effc0 addr=2a0 occurred in module "unix" due to a NULL pointer dereference
Aug 29 17:37:29 asmodeus unix: [ID 100000 kern.notice]
Aug 29 17:37:29 asmodeus unix: [ID 839527 kern.notice] sched:
Aug 29 17:37:29 asmodeus unix: [ID 753105 kern.notice] #pf Page fault
Aug 29 17:37:29 asmodeus unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x2a0
Aug 29 17:37:29 asmodeus unix: [ID 243837 kern.notice] pid=0, pc=0xfffffffffb842a1b, sp=0xffffff00087f00b8, eflags=0x10246
Aug 29 17:37:29 asmodeus unix: [ID 211416 kern.notice] cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
Aug 29 17:37:29 asmodeus unix: [ID 624947 kern.notice] cr2: 2a0
Aug 29 17:37:29 asmodeus unix: [ID 625075 kern.notice] cr3: 3400000
Aug 29 17:37:29 asmodeus unix: [ID 625715 kern.notice] cr8: c
Aug 29 17:37:29 asmodeus unix: [ID 100000 kern.notice]
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] rdi:  2a0 rsi:  4 rdx: ffffff00087f0c80
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] rcx:  2 r8:  1d0 r9:  ff000000ff00
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] rax:  0 rbx:  4 rbp: ffffff00087f0110
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] r10:  43 r11:  1d0c0 r12:  2a0
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] r13:  0 r14:  0 r15: ffffff01db281800
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] fsb:  0 gsb: ffffff01caa58580 ds:  4b
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] es:  4b fs:  0 gs:  1c3
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] trp:  e err:  2 rip: fffffffffb842a1b
Aug 29 17:37:29 asmodeus unix: [ID 592667 kern.notice] cs:  30 rfl:  10246 rsp: ffffff00087f00b8
Aug 29 17:37:29 asmodeus unix: [ID 266532 kern.notice] ss:  38
Aug 29 17:37:29 asmodeus unix: [ID 100000 kern.notice]
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087efea0 unix:die+c8 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087effb0 unix:trap+13b9 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087effc0 unix:cmntrap+e9 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0110 unix:mutex_enter+b ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0130 zfs:zio_buf_alloc+28 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0170 zfs:zio_read_init+49 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f01a0 zfs:zio_execute+7f ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f01e0 zfs:zio_wait+2e ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0290 zfs:arc_read_nolock+739 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0330 zfs:arc_read+7d ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0460 zfs:scrub_visitbp+141 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0570 zfs:scrub_visitbp+1bd ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0680 zfs:scrub_visitbp+42c ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0790 zfs:scrub_visitbp+1bd ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f08a0 zfs:scrub_visitbp+2ea ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f08f0 zfs:scrub_visit_rootbp+4e ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0aa0 zfs:dsl_pool_scrub_sync+12c ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0b10 zfs:dsl_pool_sync+158 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0bb0 zfs:spa_sync+254 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0c60 zfs:txg_sync_thread+226 ()
Aug 29 17:37:29 asmodeus genunix: [ID 655072 kern.notice] ffffff00087f0c70 unix:thread_start+8 ()
Aug 29 17:37:29 asmodeus unix: [ID 100000 kern.notice]
Aug 29 17:37:29 asmodeus genunix: [ID 672855 kern.notice] syncing file systems...
Aug 29 17:37:29 asmodeus genunix: [ID 904073 kern.notice] done
Aug 29 17:37:30 asmodeus genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c3t0d0s1, offset 429391872, content: kernel
Aug 29 17:37:30 asmodeus ahci: [ID 405573 kern.info] NOTICE: ahci0: ahci_tran_reset_dport port 0 reset port
Aug 29 17:37:33 asmodeus genunix: [ID 409368 kern.notice] ^M100% done: 120113 pages dumped, compression ratio 3.77,
Aug 29 17:37:33 asmodeus genunix: [ID 851671 kern.notice] dump succeeded

Any help would be appreciated.

-- This message posted from opensolaris.org
Ok, I've managed to get around the kernel panic.

aldredmr@asmodeus:~/Download$ pfexec mdb -kw
Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp scsi_vhci zfs sd ip hook neti sctp arp usba uhci s1394 fctl md lofs random sppp ipc ptm fcip fcp cpc crypto logindmux ii nsctl sdbc ufs rdc nsmb sv ]
> vdev_uberblock_compare+0x49/W 1
vdev_uberblock_compare+0x49:    0xffffffff      =       0x1
> vdev_uberblock_compare+0x3b/W 1
vdev_uberblock_compare+0x3b:    0xffffffff      =       0x1
> zfsvfs_setup+0x60/v 0xeb
zfsvfs_setup+0x60:              0x74            =       0xeb

This has let me import the pool without the kernel panicking, and I'm doing a scrub on the pool now. The thing is, I don't know what those commands do; could anyone enlighten me?

-- This message posted from opensolaris.org
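For anyone else puzzling over that syntax: in mdb, `addr/W val` writes a 32-bit value at kernel address `addr`, and `addr/v` writes a single byte, so these commands patch the running kernel's code in place (the `0x74 = 0xeb` output at `zfsvfs_setup+0x60` looks like a conditional jump opcode, 0x74 = je, being replaced with an unconditional jump, 0xeb = jmp, on x86). A read-only sketch of how one might inspect those locations before and after patching, assuming the same symbols exist in your kernel build:

```shell
# Read-only inspection; no /W or /v writes here, so nothing is modified.
# ::dis disassembles around an address; addr/X dumps a 32-bit word in hex.
pfexec mdb -k <<'EOF'
zfsvfs_setup+0x60::dis
zfsvfs_setup+0x60/X
vdev_uberblock_compare+0x49/X
EOF
```

Worth stressing that the writes above only change the in-memory kernel text; they disappear on reboot, which is why they had to be repeated from a LiveDVD session.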
Andrew wrote:
> hey Victor,
>
> Where would I find that? I'm still somewhat getting used to the
> Solaris environment. /var/adm/messages doesn't seem to show any panic
> info.. I only have remote access via SSH, so I hope I can do
> something with dtrace to pull it.

Do you have anything in /var/crash/<hostname>? If yes, then do something like this and provide the output:

cd /var/crash/<hostname>
echo "::status" | mdb -k <dump number>
echo "::stack" | mdb -k <dump number>
echo "::msgbuf -v" | mdb -k <dump number>

victor
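If it isn't obvious what <dump number> should be: saved panic dumps live as unix.N / vmcore.N pairs under /var/crash/<hostname>, and mdb run from that directory accepts the number N by itself. A minimal sketch (the dump number 0 is an assumption; use whatever ls actually shows):

```shell
cd /var/crash/$(hostname)
ls                          # look for unix.N / vmcore.N pairs
echo "::status" | mdb 0     # "0" opens the unix.0/vmcore.0 pair
```

If the directory is empty, the dump may never have been extracted from the dump device; running savecore as root should pull it out.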
I woke up yesterday morning, only to discover my system kept rebooting.

It had been running fine for a long while. I upgraded to snv_98 a couple of weeks back (from snv_95), and had upgraded my RAIDZ zpool from version 11 to 13 for improved scrub performance.

After some research it turned out that, on bootup, importing my 4TB RAIDZ array was causing the system to panic (similar to the OP's error). I got that bypassed, and can now at least boot the system. However, when I try anything (like mdb -kw), it tells me there is no command-line editing: "mdb: no terminal data available for TERM=vt320. term init failed: command-line editing and prompt will not be available". This means I can't really try what aldredmr had done in mdb, and I really don't have any experience with it. I upgraded to snv_100 (November), but I'm experiencing exactly the same issues.

If anyone has some insight, it would be greatly appreciated. Thanks

-- This message posted from opensolaris.org
Andrew,

> I woke up yesterday morning, only to discover my system kept
> rebooting..
>
> It's been running fine for the last while. I upgraded to snv_98 a
> couple weeks back (from 95), and had upgraded my RaidZ Zpool from
> version 11 to 13 for improved scrub performance.
>
> After some research it turned out that, on bootup, importing my 4tb
> raidZ array was causing the system to panic (similar to this OP's
> error). I got that bypassed, and can now at least boot the system..
>
> However, when I try anything (like mdb -kw), it advises me that
> there is no command line editing because: "mdb: no terminal data
> available for TERM=vt320. term init failed: command-line editing and
> prompt will not be available". This means I can't really try what
> aldredmr had done in mdb, and I really don't have any experience in
> it. I upgraded to snv_100 (November), but experiencing the exact
> same issues
>
> If anyone has some insight, it would be greatly appreciated. Thanks

I have the same problem SSHing in from Mac OS X, which sets the TERM type to 'xterm-color', also not supported. Do the following, depending on your default shell, and you should be all set:

TERM=vt100; export TERM

or

setenv TERM vt100

Jim
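To avoid retyping that every session, the fallback can be made conditional in a shell startup file. A sketch in sh/ksh/bash syntax (`term_fallback` is a hypothetical helper name, and the list of terminal types to remap mirrors the two cases in this thread; adjust for your own terminals):

```shell
# Sketch: map terminal types this host doesn't know about to vt100.
term_fallback() {
    case "$1" in
        vt320|xterm-color) echo vt100 ;;   # unsupported here: fall back
        *)                 echo "$1" ;;    # otherwise keep what we were given
    esac
}

# e.g. in ~/.profile:
TERM=$(term_fallback "$TERM"); export TERM
```

csh/tcsh users would express the same check with a `switch` block ending in `setenv TERM vt100`.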
Thanks a lot! Google didn't seem to cooperate as well as I had hoped.

Still no dice on the import, though. I only have shell access from my BlackBerry Pearl from where I am, so it's kind of hard, but I'm managing. I've tried the OP's exact commands, and even tried importing the array read-only, yet the system still wants to panic. I really hope I don't have to redo my array and lose everything, as I still have faith in ZFS...

-- This message posted from opensolaris.org
Andrew,

Andrew wrote:
> Thanks a lot! Google didn't seem to cooperate as well as I had hoped.
>
> Still no dice on the import. I only have shell access on my
> Blackberry Pearl from where I am, so it's kind of hard, but I'm
> managing.. I've tried the OP's exact commands, and even trying to
> import array as ro, yet the system still wants to panic.. I really
> hope I don't have to redo my array, and lose everything as I still
> have faith in ZFS...

Could you please post a bit more detail - at least the panic string and the stack backtrace during the panic. That would help to get an idea about what might have gone wrong.

regards,
victor
hey Victor,

Where would I find that? I'm still somewhat getting used to the Solaris environment. /var/adm/messages doesn't seem to show any panic info.. I only have remote access via SSH, so I hope I can do something with dtrace to pull it.

Thanks,
Andrew

-- This message posted from opensolaris.org
Not too sure if it's much help. I enabled kernel pages and curproc.. Let me know if I need to enable "all" then.

solaria crash # echo "::status" | mdb -k
debugging live kernel (64-bit) on solaria
operating system: 5.11 snv_98 (i86pc)
solaria crash # echo "::stack" | mdb -k
solaria crash # echo "::msgbuf -v" | mdb -k
TIMESTAMP            LOGCTL           MESSAGE
2008 Nov 7 18:53:55 ffffff01c901dcf0 capacity = 1953525168 sectors
2008 Nov 7 18:53:55 ffffff01c901db70 /pci@0,0/pci1022,9608@9/pci1095,7132@0 :
2008 Nov 7 18:53:55 ffffff01c901d9f0 SATA disk device at port 0
2008 Nov 7 18:53:55 ffffff01c901d870 model ST31000340AS
2008 Nov 7 18:53:55 ffffff01c901d6f0 firmware SD15
2008 Nov 7 18:53:55 ffffff01c901d570 serial number
2008 Nov 7 18:53:55 ffffff01c901d3f0 supported features:
2008 Nov 7 18:53:55 ffffff01c901d270 48-bit LBA, DMA, Native Command Queueing, SMART self-test
2008 Nov 7 18:53:55 ffffff01c901d0f0 SATA Gen1 signaling speed (1.5Gbps)
2008 Nov 7 18:53:55 ffffff01c901adf0 Supported queue depth 32, limited to 31
2008 Nov 7 18:53:55 ffffff01c901ac70 capacity = 1953525168 sectors
2008 Nov 7 18:53:55 ffffff01c901aaf0 /pci@0,0/pci1022,9607@7/pci1095,7132@0 :
2008 Nov 7 18:53:55 ffffff01c901a970 SATA disk device at port 0
2008 Nov 7 18:53:55 ffffff01c901a7f0 model Maxtor 6L250S0
2008 Nov 7 18:53:55 ffffff01c901a670 firmware BANC1G10
2008 Nov 7 18:53:55 ffffff01c901a4f0 serial number
2008 Nov 7 18:53:55 ffffff01c901a370 supported features:
2008 Nov 7 18:53:55 ffffff01c901a2b0 48-bit LBA, DMA, Native Command Queueing, SMART self-test
2008 Nov 7 18:53:55 ffffff01c901a130 SATA Gen1 signaling speed (1.5Gbps)
2008 Nov 7 18:53:55 ffffff01c901a070 Supported queue depth 32, limited to 31
2008 Nov 7 18:53:55 ffffff01c9017ef0 capacity = 490234752 sectors
2008 Nov 7 18:53:55 ffffff01c9017d70 pseudo-device: ramdisk1024
2008 Nov 7 18:53:55 ffffff01c9017bf0 ramdisk1024 is /pseudo/ramdisk@1024
2008 Nov 7 18:53:55 ffffff01c9017a70 NOTICE: e1000g0 registered
2008 Nov 7 18:53:55 ffffff01c90179b0 pcplusmp: pci8086,100e (e1000g) instance 0 vector 0x14 ioapic 0x2 intin 0x14 is bound to cpu 0
2008 Nov 7 18:53:55 ffffff01c90178f0 Intel(R) PRO/1000 Network Connection, Driver Ver. 5.2.12
2008 Nov 7 18:53:56 ffffff01c9017830 pseudo-device: lockstat0
2008 Nov 7 18:53:56 ffffff01c9017770 lockstat0 is /pseudo/lockstat@0
2008 Nov 7 18:53:56 ffffff01c90176b0 sd6 at si31240: target 0 lun 0
2008 Nov 7 18:53:56 ffffff01c90175f0 sd6 is /pci@0,0/pci1022,9606@6/pci1095,7132@0/disk@0,0
2008 Nov 7 18:53:56 ffffff01c9017530 sd5 at si31242: target 0 lun 0
2008 Nov 7 18:53:56 ffffff01c9017470 sd5 is /pci@0,0/pci1022,9608@9/pci1095,7132@0/disk@0,0
2008 Nov 7 18:53:56 ffffff01c90173b0 sd4 at si31241: target 0 lun 0
2008 Nov 7 18:53:56 ffffff01c90172f0 sd4 is /pci@0,0/pci1022,9607@7/pci1095,7132@0/disk@0,0
2008 Nov 7 18:53:56 ffffff01c9017230 /pci@0,0/pci1022,9607@7/pci1095,7132@0/disk@0,0 (sd4) online
2008 Nov 7 18:53:56 ffffff01c9017170 /pci@0,0/pci1022,9607@7/pci1095,7132@0 :
2008 Nov 7 18:53:56 ffffff01c90170b0 SATA disk device at port 1
2008 Nov 7 18:53:56 ffffff01c9087f30 model ST31000340AS
2008 Nov 7 18:53:56 ffffff01c9087e70 firmware SD15
2008 Nov 7 18:53:56 ffffff01c9087db0 serial number
2008 Nov 7 18:53:56 ffffff01c9087cf0 supported features:
2008 Nov 7 18:53:56 ffffff01c9087c30 48-bit LBA, DMA, Native Command Queueing, SMART self-test
2008 Nov 7 18:53:56 ffffff01c9087b70 SATA Gen1 signaling speed (1.5Gbps)
2008 Nov 7 18:53:56 ffffff01c9087ab0 Supported queue depth 32, limited to 31
2008 Nov 7 18:53:56 ffffff01c90879f0 capacity = 1953525168 sectors
2008 Nov 7 18:53:56 ffffff01c9087930 /pci@0,0/pci1022,9606@6/pci1095,7132@0/disk@0,0 (sd6) online
2008 Nov 7 18:53:56 ffffff01c9087870 /pci@0,0/pci1022,9608@9/pci1095,7132@0/disk@0,0 (sd5) online
2008 Nov 7 18:53:56 ffffff01c90877b0 /pci@0,0/pci1022,9608@9/pci1095,7132@0 :
2008 Nov 7 18:53:56 ffffff01c90876f0 SATA disk device at port 1
2008 Nov 7 18:53:56 ffffff01c9087630 model ST31000340AS
2008 Nov 7 18:53:56 ffffff01c9087570 firmware SD15
2008 Nov 7 18:53:56 ffffff01c90874b0 serial number
2008 Nov 7 18:53:56 ffffff01c90873f0 supported features:
2008 Nov 7 18:53:56 ffffff01c9087330 48-bit LBA, DMA, Native Command Queueing, SMART self-test
2008 Nov 7 18:53:56 ffffff01c9087270 SATA Gen1 signaling speed (1.5Gbps)
2008 Nov 7 18:53:56 ffffff01c90871b0 Supported queue depth 32, limited to 31
2008 Nov 7 18:53:56 ffffff01c90870f0 capacity = 1953525168 sectors
2008 Nov 7 18:53:56 ffffff01c907feb0 /pci@0,0/pci1022,9606@6/pci1095,7132@0 :
2008 Nov 7 18:53:56 ffffff01c907fdf0 SATA disk device at port 1
2008 Nov 7 18:53:56 ffffff01c907fd30 model ST31000340AS
2008 Nov 7 18:53:56 ffffff01c907fc70 firmware SD15
2008 Nov 7 18:53:56 ffffff01c907fbb0 serial number
2008 Nov 7 18:53:56 ffffff01c907faf0 supported features:
2008 Nov 7 18:53:56 ffffff01c907fa30 48-bit LBA, DMA, Native Command Queueing, SMART self-test
2008 Nov 7 18:53:56 ffffff01c907f970 SATA Gen1 signaling speed (1.5Gbps)
2008 Nov 7 18:53:56 ffffff01c907f8b0 Supported queue depth 32, limited to 31
2008 Nov 7 18:53:56 ffffff01c907f7f0 capacity = 1953525168 sectors
2008 Nov 7 18:53:56 ffffff01c907f730 pseudo-device: fcsm0
2008 Nov 7 18:53:56 ffffff01c907f670 fcsm0 is /pseudo/fcsm@0
2008 Nov 7 18:53:56 ffffff01c907f5b0 sd7 at si31241: target 1 lun 0
2008 Nov 7 18:53:56 ffffff01c907f4f0 sd7 is /pci@0,0/pci1022,9607@7/pci1095,7132@0/disk@1,0
2008 Nov 7 18:53:56 ffffff01c907f430 sd8 at si31242: target 1 lun 0
2008 Nov 7 18:53:56 ffffff01c907f370 sd8 is /pci@0,0/pci1022,9608@9/pci1095,7132@0/disk@1,0
2008 Nov 7 18:53:56 ffffff01c907f2b0 sd9 at si31240: target 1 lun 0
2008 Nov 7 18:53:56 ffffff01c907f1f0 sd9 is /pci@0,0/pci1022,9606@6/pci1095,7132@0/disk@1,0
2008 Nov 7 18:53:56 ffffff01c907f130 pseudo-device: lofi0
2008 Nov 7 18:53:56 ffffff01c907f070 lofi0 is /pseudo/lofi@0
2008 Nov 7 18:53:56 ffffff01c9186ef0 /pci@0,0/pci1022,9607@7/pci1095,7132@0/disk@1,0 (sd7) online
2008 Nov 7 18:53:56 ffffff01c9186e30 /pci@0,0/pci1022,9606@6/pci1095,7132@0/disk@1,0 (sd9) online
2008 Nov 7 18:53:56 ffffff01c9186d70 /pci@0,0/pci1022,9608@9/pci1095,7132@0/disk@1,0 (sd8) online
2008 Nov 7 18:53:56 ffffff01c9186cb0 pseudo-device: profile0
2008 Nov 7 18:53:56 ffffff01c9186bf0 profile0 is /pseudo/profile@0
2008 Nov 7 18:53:56 ffffff01c9186b30 pseudo-device: systrace0
2008 Nov 7 18:53:56 ffffff01c91869b0 systrace0 is /pseudo/systrace@0
2008 Nov 7 18:53:56 ffffff01c91868f0 pseudo-device: fbt0
2008 Nov 7 18:53:56 ffffff01c9186830 fbt0 is /pseudo/fbt@0
2008 Nov 7 18:53:56 ffffff01c9186770 pseudo-device: sdt0
2008 Nov 7 18:53:56 ffffff01c91865f0 sdt0 is /pseudo/sdt@0
2008 Nov 7 18:53:56 ffffff01c9186530 pseudo-device: fasttrap0
2008 Nov 7 18:53:56 ffffff01c9186470 fasttrap0 is /pseudo/fasttrap@0
2008 Nov 7 18:53:56 ffffff01c91863b0 pseudo-device: power0
2008 Nov 7 18:53:56 ffffff01c9186230 power0 is /pseudo/power@0
2008 Nov 7 18:53:56 ffffff01c9186170 pseudo-device: srn0
2008 Nov 7 18:53:56 ffffff01c91860b0 srn0 is /pseudo/srn@0
2008 Nov 7 18:53:56 ffffff01c916ce70 pseudo-device: lx_systrace0
2008 Nov 7 18:53:56 ffffff01c916cdb0 lx_systrace0 is /pseudo/lx_systrace@0
2008 Nov 7 18:53:56 ffffff01c916ccf0 pseudo-device: ucode0
2008 Nov 7 18:53:56 ffffff01c916cb70 ucode0 is /pseudo/ucode@0
2008 Nov 7 18:53:56 ffffff01c916cab0 pseudo-device: vboxdrv0
2008 Nov 7 18:53:56 ffffff01c916c9f0 vboxdrv0 is /pseudo/vboxdrv@0
2008 Nov 7 18:53:56 ffffff01c916c930 pseudo-device: ncall0
2008 Nov 7 18:53:56 ffffff01c916c870 ncall0 is /pseudo/ncall@0
2008 Nov 7 18:53:56 ffffff01c916c7b0 pseudo-device: nsctl0
2008 Nov 7 18:53:56 ffffff01c916c6f0 nsctl0 is /pseudo/nsctl@0
2008 Nov 7 18:53:56 ffffff01c916c630 pseudo-device: nsctl0
2008 Nov 7 18:53:56 ffffff01c916c570 nsctl0 is /pseudo/nsctl@0
2008 Nov 7 18:53:56 ffffff01c916c4b0 pseudo-device: ii0
2008 Nov 7 18:53:56 ffffff01c916c3f0 ii0 is /pseudo/ii@0
2008 Nov 7 18:53:56 ffffff01c916c270 pseudo-device: sdbc0
2008 Nov 7 18:53:56 ffffff01c916c1b0 sdbc0 is /pseudo/sdbc@0
2008 Nov 7 18:53:56 ffffff01c9319eb0 pseudo-device: fssnap0
2008 Nov 7 18:53:56 ffffff01c9319df0 fssnap0 is /pseudo/fssnap@0
2008 Nov 7 18:53:56 ffffff01c9319d30 @(#) rdc: built 20:12:01 Aug 31 2008
2008 Nov 7 18:53:56 ffffff01c9319c70 pseudo-device: rdc0
2008 Nov 7 18:53:56 ffffff01c9319af0 rdc0 is /pseudo/rdc@0
2008 Nov 7 18:53:56 ffffff01c9319a30 pseudo-device: winlock0
2008 Nov 7 18:53:56 ffffff01c93198b0 winlock0 is /pseudo/winlock@0
2008 Nov 7 18:53:56 ffffff01c93197f0 pseudo-device: pm0
2008 Nov 7 18:53:56 ffffff01c93195b0 pm0 is /pseudo/pm@0
2008 Nov 7 18:53:57 ffffff01c93194f0 pseudo-device: pool0
2008 Nov 7 18:53:57 ffffff01c9319430 pool0 is /pseudo/pool@0
2008 Nov 7 18:53:57 ffffff01c9319370 IP Filter: v4.1.9, running.
2008 Nov 7 18:53:57 ffffff01c93192b0 pseudo-device: nsmb0
2008 Nov 7 18:53:57 ffffff01c9319070 nsmb0 is /pseudo/nsmb@0
2008 Nov 7 18:53:57 ffffff01c990bef0 sv Aug 31 2008 20:12:26 (revision 11.11, 11.11.0_5.11, 08.31.2008)
2008 Nov 7 18:53:57 ffffff01c990bd70 pseudo-device: sv0
2008 Nov 7 18:53:57 ffffff01c990bcb0 sv0 is /pseudo/sv@0
2008 Nov 7 18:54:01 ffffff01c990bb30 dump on /dev/zvol/dsk/rpool/dump size 4096 MB
2008 Nov 7 18:54:03 ffffff01c990b770 NOTICE: e1000g0 link up, 1000 Mbps, full duplex
2008 Nov 7 18:54:16 ffffff01d0262d70 pcplusmp: asy (asy) instance 0 vector 0x4 ioapic 0x2 intin 0x4 is bound to cpu 1
2008 Nov 7 18:54:16 ffffff01d0262bf0 ISA-device: asy0
2008 Nov 7 18:54:16 ffffff01d0262b30 asy0 is /isa/asy@1,3f8
2008 Nov 7 18:54:17 ffffff01c9017cb0 pseudo-device: dtrace0
2008 Nov 7 18:54:17 ffffff01d0262170 dtrace0 is /pseudo/dtrace@0
2008 Nov 7 18:54:18 ffffff01d01a0670 NOTICE: vnic1007 registered
2008 Nov 7 18:54:18 ffffff01d01a07f0 NOTICE: vnic1008 registered
2008 Nov 7 18:54:45 ffffff01c9021530 xsvc0 at root: space 0 offset 0
2008 Nov 7 18:54:45 ffffff01d01a0bb0 xsvc0 is /xsvc@0,0
2008 Nov 7 18:54:46 ffffff01c901a430 pseudo-device: devinfo0
2008 Nov 7 18:54:46 ffffff01c901a5b0 devinfo0 is /pseudo/devinfo@0

-- This message posted from opensolaris.org
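On the "kernel pages vs. all" question: what a panic dump captures is controlled by dumpadm(1M), and `-c all` makes the next crash dump far more useful for post-mortem analysis at the cost of dump-device space. A sketch (standard Solaris commands; run as root or via pfexec):

```shell
pfexec dumpadm           # show the current dump device, savecore dir, and content setting
pfexec dumpadm -c all    # capture all memory pages on the next panic
```

Note this only affects future panics; the msgbuf above is from the live kernel, not from a saved dump.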
So I tried a few more things. I think the combination of the following in /etc/system made the difference:

set pcplusmp:apic_use_acpi=0
set sata:sata_max_queue_depth = 0x1
set zfs:zfs_recover=1      <<< I had this before
set aok=1                  <<< I had this before too

I crossed my fingers, and it actually imported this time.. Somehow..

solaria ~ # zpool status
  pool: itank
 state: ONLINE
 scrub: scrub in progress for 0h7m, 2.76% done, 4h33m to go
config:

        NAME         STATE     READ WRITE CKSUM
        itank        ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c12t1d0  ONLINE       0     0     0
            c13t0d0  ONLINE       0     0     0
            c11t0d0  ONLINE       0     0     0
            c13t1d0  ONLINE       0     0     0
            c11t1d0  ONLINE       0     0     0

I'm running some scrubs on it now, and I HOPE everything is okay... Anything else you suggest I try before it's considered stable?

Thanks

-- This message posted from opensolaris.org
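One way to double-check that those /etc/system tunables actually took effect after the reboot is to print them from the live kernel with mdb: `/D` formats a 32-bit value in decimal, and the backquoted module` prefix scopes the symbol lookup. A sketch (variable names taken from the /etc/system lines above; whether each symbol is visible depends on your build, so treat missing-symbol errors as just that):

```shell
# Expect 1 for the first two if /etc/system was applied at boot.
echo 'zfs`zfs_recover/D'           | pfexec mdb -k
echo 'aok/D'                       | pfexec mdb -k
echo 'sata`sata_max_queue_depth/D' | pfexec mdb -k
```

Beyond that, letting the scrub run to completion and checking `zpool status -v` for errors afterwards is probably the best stability test available.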