Stephan Budach
2011-Nov-19 22:55 UTC
[zfs-discuss] After update to S11, zfs reports some disks as "corrupted data"
Hi all,

I am in the process of updating my SE11 servers to S11. On one server I have two zpools made up of mirror vdevs, and up to now neither of these zpools has shown any error. However, after updating to S11, three disks - and unfortunately two of them in the same mirror vdev - are shown as UNAVAIL due to "corrupted data":

  pool: obelixData
    id: 9610325806378085846
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        obelixData                 UNAVAIL  insufficient replicas
          mirror-0                 DEGRADED
            c9t2100001378AC02DDd0  ONLINE
            c9t2100001378AC02F4d0  UNAVAIL  corrupted data
          mirror-1                 UNAVAIL  insufficient replicas
            c9t2100001378AC02DDd1  UNAVAIL  corrupted data
            c9t2100001378AC02F4d1  UNAVAIL  corrupted data
          mirror-2                 ONLINE
            c9t2100001378AC02DDd2  ONLINE
            c9t2100001378AC02F4d2  ONLINE
          mirror-3                 ONLINE
            c9t2100001378AC02DDd3  ONLINE
            c9t2100001378AC02F4d3  ONLINE
          mirror-4                 ONLINE
            c9t2100001378AC02DDd5  ONLINE
            c9t2100001378AC02F4d5  ONLINE
          mirror-5                 ONLINE
            c9t2100001378AC02DDd4  ONLINE
            c9t2100001378AC02F4d4  ONLINE
          mirror-6                 ONLINE
            c9t2100001378AC02DDd6  ONLINE
            c9t2100001378AC02F4d6  ONLINE
          mirror-7                 ONLINE
            c9t2100001378AC02DDd7  ONLINE
            c9t2100001378AC02F4d7  ONLINE
          mirror-8                 ONLINE
            c9t2100001378AC02DDd8  ONLINE
            c9t2100001378AC02F4d8  ONLINE
          mirror-9                 ONLINE
            c9t2100001378AC02DDd9  ONLINE
            c9t2100001378AC02F4d9  ONLINE
          mirror-10                ONLINE
            c9t2100001378AC02DDd10 ONLINE
            c9t2100001378AC02F4d10 ONLINE
          mirror-11                ONLINE
            c9t2100001378AC02DDd11 ONLINE
            c9t2100001378AC02F4d11 ONLINE
          mirror-12                ONLINE
            c9t2100001378AC02DDd12 ONLINE
            c9t2100001378AC02F4d12 ONLINE
          mirror-13                ONLINE
            c9t2100001378AC02DDd13 ONLINE
            c9t2100001378AC02F4d13 ONLINE
          mirror-14                ONLINE
            c9t2100001378AC02DDd14 ONLINE
            c9t2100001378AC02F4d14 ONLINE
        logs
          mirror-15                ONLINE
            c9t2100001378AC02D9d0  ONLINE
            c9t2100001378AC02BFd0  ONLINE

So I reverted back to my saved SE11 BE, and SE11 shows these disks as online and imports this zpool without any issue. I now have a scrub running on this zpool, but I am curious whether anyone has ever experienced something similar.

Thanks,
budy
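For completeness, this is roughly how I went back and kicked off the scrub - the BE name below is only a placeholder for my saved SE11 boot environment, not its real name:

    # list importable pools and their reported device state
    zpool import

    # activate the saved SE11 boot environment and reboot into it
    beadm activate se11-backup
    init 6

    # back on SE11 the pool imports cleanly; scrub it to verify the data
    zpool import obelixData
    zpool scrub obelixData
    zpool status -v obelixData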
Stephan Budach
2011-Nov-21 19:02 UTC
[zfs-discuss] After update to S11, zfs reports some disks as "corrupted data"
Phew... it seems that the S11 update process replaced my modified qlc.conf with a standard one. In SE11 I had to lower the queue depth by setting max_execution_throttle to something lower than 16, mostly because I am exposing 16 LUNs from each storage array, and the qlc driver otherwise flooded the storage controllers with I/Os.

Cheers,
budy
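In case anybody else trips over this after the update: the relevant bit of my qlc.conf looks roughly like the sketch below. The property name and value here are from memory, so please verify them against the qlc driver documentation for your release before copying anything:

    # /kernel/drv/qlc.conf (sketch - check the exact property name
    # for your qlc driver version)
    #
    # Lower the queue depth so that 16 exposed LUNs per storage
    # controller no longer flood it with outstanding I/Os.
    max_execution_throttle=8;

The new value only takes effect once the driver re-reads its configuration, i.e. after a reboot.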