Lutz Schumann
2010-Jan-24 20:20 UTC
[zfs-discuss] Degraded pool members excluded from writes ?
Hello,
I'm testing with snv_131 (NexentaCore 3 alpha 4). I ran a bonnie
benchmark against my disks and pulled a disk while benchmarking. Everything
went smoothly; however, I found that the now degraded device is excluded
from the writes.
So this is my pool after I pulled the disk:
pool: mypool
state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        mypool        DEGRADED     0     0     0
          mirror-0    ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            c0t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
          mirror-2    ONLINE       0     0     0
            c0t8d0    ONLINE       0     0     0
            c0t9d0    ONLINE       0     0     0
          mirror-3    ONLINE       0     0     0
            c0t10d0   ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
          mirror-4    ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0
            c0t13d0   ONLINE       0     0     0
          mirror-5    ONLINE       0     0     0
            c0t15d0   ONLINE       0     0     0
            c0t16d0   ONLINE       0     0     0
          mirror-6    ONLINE       0     0     0
            c0t17d0   ONLINE       0     0     0
            c0t18d0   ONLINE       0     0     0
          mirror-7    DEGRADED     0     0     0
            c0t19d0   REMOVED      0     0     0
            c0t20d0   ONLINE       0     0     0
          mirror-8    ONLINE       0     0     0
            c0t21d0   ONLINE       0     0     0
            c0t22d0   ONLINE       0     0     0
          mirror-9    ONLINE       0     0     0
            c0t23d0   ONLINE       0     0     0
            c0t24d0   ONLINE       0     0     0
          mirror-10   ONLINE       0     0     0
            c0t25d0   ONLINE       0     0     0
            c0t26d0   ONLINE       0     0     0
And this is the I/O to the pool:
                 capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
mypool        35.9G  9.93T      0  1.54K    201   191M
  mirror      3.53G   924G      0    153      0  18.8M
    c0t4d0        -      -      0    151      0  18.8M
    c0t5d0        -      -      0    151      0  18.9M
  mirror      3.50G   925G      0    154      0  19.0M
    c0t6d0        -      -      0    154      0  19.0M
    c0t7d0        -      -      0    154      0  19.0M
  mirror      3.50G   924G      0    154      0  19.1M
    c0t8d0        -      -      0    155      0  19.1M
    c0t9d0        -      -      0    155      0  19.1M
  mirror      3.51G   924G      0    155      0  19.1M
    c0t10d0       -      -      0    155      0  19.1M
    c0t11d0       -      -      0    155      0  19.1M
  mirror      3.53G   924G      0    155      0  19.1M
    c0t12d0       -      -      0    155      0  19.1M
    c0t13d0       -      -      0    155      0  19.1M
  mirror      3.51G   924G      0    153      0  18.9M
    c0t15d0       -      -      0    153      0  18.9M
    c0t16d0       -      -      0    153      0  18.9M
  mirror      3.51G   924G      0    157    100  18.9M
    c0t17d0       -      -      0    155      0  18.9M
    c0t18d0       -      -      0    155  6.29K  18.9M
  mirror       673M   927G      0      5    100  7.27K
    c0t19d0       -      -      0      0      0      0
    c0t20d0       -      -      0      3  6.29K  7.27K
  mirror      3.65G   924G      0    169      0  20.1M
    c0t21d0       -      -      0    167      0  20.1M
    c0t22d0       -      -      0    167      0  20.2M
  mirror      3.51G   924G      0    157      0  18.7M
    c0t23d0       -      -      0    156      0  18.8M
    c0t24d0       -      -      0    155      0  18.7M
  mirror      3.53G   924G      0    158      0  18.9M
    c0t25d0       -      -      0    157      0  19.0M
    c0t26d0       -      -      0    156      0  18.9M
------------  -----  -----  -----  -----  -----  -----
syspool       1.24G   231G      1      0  4.37K      0
  mirror      1.24G   231G      1      0  4.37K      0
    c0t0d0s0      -      -      0      0      0      0
    c0t1d0s0      -      -      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
One can see that the degraded mirror is excluded from the writes.
I think this is expected behaviour, right?
(data protection over performance)
Regards,
Robert
--
This message posted from opensolaris.org
Bill Sommerfeld
2010-Jan-24 21:07 UTC
[zfs-discuss] Degraded pool members excluded from writes ?
On 01/24/10 12:20, Lutz Schumann wrote:
> One can see that the degraded mirror is excluded from the writes.
>
> I think this is expected behaviour, right?
> (data protection over performance)

That's correct. It will use the space if it needs to, but it prefers to
avoid "sick" top-level vdevs if there are healthy ones available.

- Bill
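The preference Bill describes can be sketched in a few lines. This is only an
illustration of the idea (prefer healthy top-level vdevs, fall back to
degraded ones when no healthy vdev has free space), not ZFS's actual metaslab
allocator; the function name and the dict layout are made up for the example.

```python
def pick_vdev(vdevs):
    """Pick a top-level vdev for a write.

    vdevs: list of dicts with 'name', 'healthy' (bool), 'free' (bytes).
    Healthy vdevs with free space are preferred; degraded vdevs are used
    only when no healthy vdev can take the write.
    """
    healthy = [v for v in vdevs if v["healthy"] and v["free"] > 0]
    candidates = healthy or [v for v in vdevs if v["free"] > 0]
    if not candidates:
        raise RuntimeError("pool is full")
    # Toy heuristic: take the candidate with the most free space.
    return max(candidates, key=lambda v: v["free"])

vdevs = [
    {"name": "mirror-6", "healthy": True,  "free": 924 * 2**30},
    {"name": "mirror-7", "healthy": False, "free": 927 * 2**30},  # degraded
]
print(pick_vdev(vdevs)["name"])  # mirror-6: the healthy vdev wins
                                 # even though mirror-7 has more free space
```

This matches the iostat output above: mirror-7 still received a trickle of
writes, so it is avoided rather than hard-excluded.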