We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with no errors and everything looks good, but when we try to access zvols shared out with COMSTAR, Windows reports that the devices have bad blocks. Everything had been working great until last night, and no changes have been made to this system in weeks. We are really hoping we can get our data back from this system. The three volumes that start with CV are shared with a server called CV; likewise with DPM.

The drives are connected to the OpenSolaris box with Fibre Channel, and we are also using Fibre Channel in target mode for the host side.

OpenSolaris b134

admin@COMSTAR2:~$ zpool status
  pool: pool_1
 state: ONLINE
 scrub: scrub in progress for 0h0m, 0.01% done, 93h49m to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        pool_1                                      ONLINE       0     0     0
          raidz2-0                                  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000007Dd0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000007Ed0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000007Fd0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000080d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000081d0  ONLINE       0     0     0
          raidz2-1                                  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000082d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000083d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000084d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000085d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000086d0  ONLINE       0     0     0
          raidz2-2                                  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000087d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000088d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000089d0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000008Ad0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000008Bd0  ONLINE       0     0     0
          raidz2-3                                  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000008Cd0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000008Dd0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000008Ed0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000008Fd0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000090d0  ONLINE       0     0     0
          raidz2-4                                  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000091d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000092d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000093d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000094d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000095d0  ONLINE       0     0     0
          raidz2-5                                  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000096d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000097d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000098d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0000000000000099d0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000009Ad0  ONLINE       0     0     0
          raidz2-6                                  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000009Bd0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000009Cd0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000009Dd0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000009Ed0  ONLINE       0     0     0
            c12t60050CC000F01A8E000000000000009Fd0  ONLINE       0     0     0
          raidz2-7                                  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A0d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A1d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A2d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A3d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A4d0  ONLINE       0     0     0
          raidz2-8                                  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A5d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A6d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A7d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A8d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000A9d0  ONLINE       0     0     0
          raidz2-9                                  ONLINE       0     0     0
            c12t60050CC000F01A8E00000000000000AAd0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000040d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000041d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000042d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000043d0  ONLINE       0     0     0
          raidz2-10                                 ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000044d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000045d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000046d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000047d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000048d0  ONLINE       0     0     0
          raidz2-11                                 ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000049d0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000004Ad0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000004Bd0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000004Cd0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000004Dd0  ONLINE       0     0     0
          raidz2-12                                 ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000004Ed0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000004Fd0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000050d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000051d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000052d0  ONLINE       0     0     0
          raidz2-13                                 ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000053d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000054d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000055d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000056d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000057d0  ONLINE       0     0     0
          raidz2-14                                 ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000058d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000059d0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000005Ad0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000005Bd0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000005Cd0  ONLINE       0     0     0
          raidz2-15                                 ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000005Dd0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000005Ed0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000005Fd0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000060d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000061d0  ONLINE       0     0     0
          raidz2-16                                 ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000062d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000063d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000064d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000065d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000066d0  ONLINE       0     0     0
          raidz2-17                                 ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000067d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000068d0  ONLINE       0     0     0
            c12t60050CC000F01AC60000000000000069d0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000006Ad0  ONLINE       0     0     0
            c12t60050CC000F01AC6000000000000006Bd0  ONLINE       0     0     0
        spares
          c12t60050CC000F01A8E00000000000000ABd0    AVAIL
          c12t60050CC000F01A8E00000000000000ACd0    AVAIL

admin@COMSTAR2:~# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
pool_1             47.2T   348K  37.5K  /pool_1
pool_1/CV_BACKUPS   212G   348K   212G  -
pool_1/CV_DEDUP    18.7T   348K  18.7T  -
pool_1/CV_FULL     9.36T   348K  9.36T  -
pool_1/DPM_FULL    18.9T   348K  18.9T  -
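For reference, a zvol such as pool_1/CV_FULL is typically exported over Fibre Channel through COMSTAR along the lines below. This is only a minimal sketch of that kind of setup, not the poster's actual configuration: the dataset name comes from the zfs list above, while the host group name CV_hosts and the <GUID> placeholders are hypothetical.

# Sketch only: "CV_hosts" is a hypothetical host group; <GUID> is the value
# reported by "sbdadm list-lu" on the actual system.

# Register the zvol's raw device node as a COMSTAR logical unit
sbdadm create-lu /dev/zvol/rdsk/pool_1/CV_FULL

# Note the GUID assigned to the new LU
sbdadm list-lu

# Create a host group for the CV initiators and expose the LU to it
stmfadm create-hg CV_hosts
stmfadm add-view -h CV_hosts <GUID>

# Verify the LU and its view entries
stmfadm list-lu -v
stmfadm list-view -l <GUID>

The GUID assigned to each LU is what ties a Windows-visible disk back to a specific zvol.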
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote:

> We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with no errors and everything looks good, but when we try to access zvols shared out with COMSTAR, Windows reports that the devices have bad blocks. Everything had been working great until last night, and no changes have been made to this system in weeks. We are really hoping we can get our data back from this system. The three volumes that start with CV are shared with a server called CV; likewise with DPM.
>
> The drives are connected to the OpenSolaris box with Fibre Channel, and we are also using Fibre Channel in target mode for the host side.

Can you share a sample of the Windows report that indicates bad blocks, along with the Windows configuration information that ties at least one instance of reported bad blocks back to the LU configured under COMSTAR? While reviewing the Windows configuration data, look for other indications of errant status.

Also send the results of the following commands:

stmfadm list-lu -v   (for the relevant LUs)
zfs list             (for the relevant zvols)

Have you reviewed the system log files on both Windows and Solaris for errant status?

You may also want to take a look at the internal COMSTAR trace buffer right after an attempt to access these LUs from Windows, and review the buffer for any reported errors. As the root user, issue:

echo "*stmf_trace_buf/s" | mdb -k

- Jim
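The checks above can also be gathered in a single pass with something like the following. This is a minimal sketch, not a supported tool: the pool name pool_1 is taken from the listing earlier in the thread, and the output path /var/tmp/comstar-diag.txt is an arbitrary choice.

#!/bin/sh
# Collect the ZFS/COMSTAR diagnostics suggested above into one file.
# Run as root; /var/tmp/comstar-diag.txt is an arbitrary output path.
OUT=/var/tmp/comstar-diag.txt

{
  echo "=== zpool status -v pool_1 ==="
  zpool status -v pool_1

  echo "=== zfs list ==="
  zfs list

  echo "=== stmfadm list-lu -v ==="
  stmfadm list-lu -v

  echo "=== COMSTAR trace buffer ==="
  echo "*stmf_trace_buf/s" | mdb -k

  echo "=== last 200 lines of /var/adm/messages ==="
  tail -200 /var/adm/messages
} > "$OUT" 2>&1

echo "Diagnostics written to $OUT"

Running it once before and once right after Windows reports a bad block should make it easier to see which layer (ZFS, STMF, or the FC target) is actually flagging errors.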