Thanks to everyone who has tried to help... this has gotten a bit crazier. I removed the 'faulty' drive and let the pool run in degraded mode. It would appear that now another drive has decided to play up:
bash-4.0# zpool status
  pool: data
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed after 2h35m with 0 errors on Wed Feb 17 13:48:16 2010
config:

        NAME        STATE     READ WRITE CKSUM
        data        DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t4d0  OFFLINE      0     0     0  366G resilvered
            c6t5d0  ONLINE       0     0     0

errors: No known data errors
Now I'm transferring some data to the pool:
                    extended device statistics           ---- errors ---
device     r/s   w/s  Mr/s  Mw/s wait actv    svc_t  %w  %b s/w h/w trn tot
sd0        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd2       18.5   0.0   2.1   0.0  0.2  0.0     13.5   3   4   0   0   0   0
sd3       18.5   0.0   2.1   0.0  0.2  0.0     14.8   4   5   0   0   0   0
sd4        1.0   0.0   0.0   0.0  9.0  1.0   9999.9 100 100   0   0   0   0
sd5       19.5   0.0   2.1   0.0  0.2  0.0     11.9   3   4   0   0   0   0
sd6        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd7       18.5   0.0   2.1   0.0  0.3  0.1     22.7   8   8   0   0   0   0
sd8        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48
sd9        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48

                    extended device statistics           ---- errors ---
device     r/s   w/s  Mr/s  Mw/s wait actv    svc_t  %w  %b s/w h/w trn tot
sd0        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd2        0.5   0.0   0.0   0.0  0.0  0.0     15.6   0   1   0   0   0   0
sd3        0.5   0.0   0.0   0.0  0.0  0.0     33.5   0   2   0   0   0   0
sd4        0.5   0.0   0.0   0.0  9.0  1.0  19999.9 100 100   0   0   0   0
sd5        0.5   0.0   0.0   0.0  0.0  0.0     21.4   0   1   0   0   0   0
sd6        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd7        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd8        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48
sd9        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48

                    extended device statistics           ---- errors ---
device     r/s   w/s  Mr/s  Mw/s wait actv    svc_t  %w  %b s/w h/w trn tot
sd0        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd2        0.5   0.5   0.0   0.0  0.0  0.0      5.9   0   1   0   0   0   0
sd3        0.5   0.5   0.0   0.0  0.0  0.0     10.3   0   1   0   0   0   0
sd4        0.5   0.0   0.0   0.0  9.0  1.0  19999.8 100 100   0   0   0   0
sd5        0.5   0.5   0.0   0.0  0.0  0.0     11.1   0   1   0   0   0   0
sd6        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd7        0.5   0.5   0.0   0.0  0.0  0.0      8.2   0   1   0   0   0   0
sd8        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48
sd9        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48

                    extended device statistics           ---- errors ---
device     r/s   w/s  Mr/s  Mw/s wait actv    svc_t  %w  %b s/w h/w trn tot
sd0        0.5   0.0   0.0   0.0  0.0  0.0      1.7   0   0   0   0   0   0
sd2        6.5  16.0   0.0   0.7  0.3  0.1     15.5   5   6   0   0   0   0
sd3        6.0   7.5   0.0   0.7  0.4  0.1     33.8   8   8   0   0   0   0
sd4        0.5   0.0   0.0   0.0  9.0  1.0  19999.9 100 100   0   0   0   0
sd5        5.5  17.5   0.0   0.7  0.2  0.0      9.9   4   5   0   0   0   0
sd6        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd7        6.5  17.5   0.0   0.7  0.4  0.1     18.0   6   6   0   0   0   0
sd8        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48
sd9        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48

                    extended device statistics           ---- errors ---
device     r/s   w/s  Mr/s  Mw/s wait actv    svc_t  %w  %b s/w h/w trn tot
sd0        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd2        2.0   0.0   0.1   0.0  0.0  0.0     16.4   1   2   0   0   0   0
sd3        2.0   0.0   0.1   0.0  0.0  0.0     29.4   1   3   0   0   0   0
sd4        1.0   0.0   0.0   0.0  9.0  1.0   9999.9 100 100   0   0   0   0
sd5        2.0   0.0   0.1   0.0  0.0  0.0     28.4   1   4   0   0   0   0
sd6        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0   0   0   0
sd7        2.0   0.0   0.1   0.0  0.0  0.0     22.1   1   3   0   0   0   0
sd8        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48
sd9        0.0   0.0   0.0   0.0  0.0  0.0      0.0   0   0   0  24  24  48
Surely this is not a drive issue; this drive has never exhibited this behaviour before. Could it be indicative of:
1. An ICH SATA chipset driver problem?
2. A Western Digital 'Green' HDD problem? (I have enabled TLER)
3. A ZFS problem?
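Before blaming any one layer, I figure I can at least look at the FMA error telemetry and the per-drive error counters. Roughly something like this (just a sketch; the amount of output to trim is arbitrary):

bash-4.0# fmdump -eV | tail -60     # recent fault-management error events (device vs. transport)
bash-4.0# iostat -En                # per-device soft/hard/transport error counts and drive details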
I'm not sure I can trust this pool any more. I may add the 'offline' drive back in to see if the 'problem' moves back to it.
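If I do put c6t4d0 back, the sequence would presumably be roughly this (a sketch only; the iostat flags are my guess at what produced the extended/error output above, and the 5-second interval is arbitrary):

bash-4.0# zpool online data c6t4d0   # bring the offlined disk back into the raidz2 vdev
bash-4.0# zpool status data          # watch it resilver and (hopefully) return to ONLINE
bash-4.0# iostat -xMe 5              # see whether the huge svc_t stays put or follows the disk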
Incidentally, these huge service times only appear to happen for writes.
Cheers.