Leandro Vanden Bosch
2010-Apr-24 21:12 UTC
[zfs-discuss] zfs-discuss Digest, Vol 54, Issue 153
Confirmed, then, that the issue was with the WD10EARS.
I swapped it out for the old drive and things look a lot better:
  pool: datos
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h3m, 0.02% done, 256h30m to go
config:

        NAME               STATE     READ WRITE CKSUM
        datos              DEGRADED     0     0     0
          raidz1-0         DEGRADED     0     0     0
            c12t1d0        ONLINE       0     0     0
            replacing-1    DEGRADED     0     0     0
              c12t0d0s0/o  OFFLINE      0     0     0
              c12t0d0      ONLINE       0     0     0  184M resilvered
            c12t2d0        ONLINE       0     0     0
Note that the estimated time to completion is now about 256h, while with the
EARS it was more than 2500h. When scrubbing this pool I usually saw that
performance is poor at first, but after some time it increases significantly,
reaching 30-50 MB/s.
Something else I want to add: when I went to replace the disk this last
time, I got some resistance from ZFS because of this:
leandro@alexia:~$ zpool status
  pool: datos
 state: DEGRADED
 scrub: resilver completed after 0h0m with 0 errors on Sat Apr 24 17:34:03 2010
config:

        NAME               STATE     READ WRITE CKSUM
        datos              DEGRADED     0     0     0
          raidz1-0         DEGRADED     0     0     0
            c12t1d0        ONLINE       0     0     0
            replacing-1    DEGRADED     0     0     4
              c12t0d0s0/o  ONLINE       0     0     0  16.0M resilvered
              c12t0d0      OFFLINE      0     0     0
            c12t2d0        ONLINE       0     0     0
The pool had c12t0d0s0/o in it, and a simple replace wouldn't work:

$ pfexec zpool replace datos c12t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c12t0d0s0 is part of active ZFS pool datos. Please see zpool(1M).
The -f option didn't work either:

$ pfexec zpool replace -f datos c12t0d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c12t0d0s0 is part of active ZFS pool datos. Please see zpool(1M).
No way to detach the intruder:

$ pfexec zpool detach datos c12t0d0s0
cannot detach c12t0d0s0: no such device in pool
Well, maybe not with that name:

$ pfexec zpool detach datos c12t0d0s0/o
$ zpool status datos
  pool: datos
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed after 0h0m with 0 errors on Sat Apr 24 17:34:03 2010
config:

        NAME         STATE     READ WRITE CKSUM
        datos        DEGRADED     0     0     0
          raidz1-0   DEGRADED     0     0     0
            c12t1d0  ONLINE       0     0     0
            c12t0d0  OFFLINE      0     0     0
            c12t2d0  ONLINE       0     0     0

errors: No known data errors
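In case it helps anyone hitting the same thing: the stale half of a 'replacing' vdev keeps the old slice name with a '/o' suffix, and that exact name (suffix included) is what zpool detach wants. A little sketch that picks it out of a captured copy of the status output above (plain shell/awk on sample text, not a zpool feature; with a live pool you'd feed it `zpool status datos` instead):

```shell
# Config lines captured from 'zpool status' above (live: zpool status datos)
status_output='datos DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
c12t1d0 ONLINE 0 0 0
replacing-1 DEGRADED 0 0 4
c12t0d0s0/o ONLINE 0 0 0
c12t0d0 OFFLINE 0 0 0
c12t2d0 ONLINE 0 0 0'

# The leftover old-label vdev is the one whose name ends in "/o";
# print that first field so it can be handed straight to 'zpool detach'.
stale=$(printf '%s\n' "$status_output" | awk '$1 ~ /\/o$/ {print $1}')
echo "$stale"                           # prints: c12t0d0s0/o
# pfexec zpool detach datos "$stale"    # the command that finally worked
```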
Now the replace works:

$ pfexec zpool replace datos c12t0d0
$ zpool status datos
  pool: datos
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.00% done, 177h22m to go
config:

        NAME               STATE     READ WRITE CKSUM
        datos              DEGRADED     0     0     0
          raidz1-0         DEGRADED     0     0     0
            c12t1d0        ONLINE       0     0     0
            replacing-1    DEGRADED     0     0     0
              c12t0d0s0/o  OFFLINE      0     0     0
              c12t0d0      ONLINE       0     0     0  19.5M resilvered
            c12t2d0        ONLINE       0     0     0

errors: No known data errors
By the time I'm writing these last lines, the performance has already
improved:
$ zpool status datos
  pool: datos
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h23m, 0.30% done, 126h14m to go
config:

        NAME               STATE     READ WRITE CKSUM
        datos              DEGRADED     0     0     0
          raidz1-0         DEGRADED     0     0     0
            c12t1d0        ONLINE       0     0     0
            replacing-1    DEGRADED     0     0     0
              c12t0d0s0/o  OFFLINE      0     0     0
              c12t0d0      ONLINE       0     0     0  2.34G resilvered
            c12t2d0        ONLINE       0     0     0

errors: No known data errors
$ zpool iostat datos 5
                        capacity     operations    bandwidth
pool                  alloc   free   read  write   read  write
-------------------  ------  -----  -----  -----  -----  -----
datos                 2.27T   460G    134      1  9.62M  7.75K
  raidz1              2.27T   460G    134      1  9.62M  7.75K
    c12t1d0               -      -    118      0  4.80M      0
    replacing             -      -      0    135      0  4.82M
      c12t0d0s0/o         -      -      0      0      0      0
      c12t0d0             -      -      0    132      0  4.82M
    c12t2d0               -      -    106      0  4.84M      0
-------------------  ------  -----  -----  -----  -----  -----
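As a sanity check on that "to go" figure: assuming the estimate is a simple linear extrapolation from progress so far (my assumption, not something I've verified in the ZFS source), the numbers in the progress line above line up nicely:

```shell
# Back-of-the-envelope ETA from the 'resilver in progress' line above:
# 0.30% done after 0h23m, extrapolated linearly to 100%.
elapsed_min=23
done_pct=0.30
remaining_h=$(awk -v e="$elapsed_min" -v p="$done_pct" \
  'BEGIN { total = e / (p / 100); printf "%.0f", (total - e) / 60 }')
echo "${remaining_h}h"   # prints: 127h -- consistent with the reported 126h14m
```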
Thanks for reading. :)
Leandro.