Currently, the algorithm is approximately round-robin. We try to keep
all vdevs working at all times, with a slight bias towards those with
less used capacity (i.e. more free space). So if you add a new disk to
a pool whose vdevs are 70% full, we'll gradually schedule more work for
the empty disk until all
the vdevs are again even.
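Roughly speaking, it behaves like a smooth weighted round-robin where a
vdev's weight is its free space. The sketch below (in C, with made-up
names like vdev_stub_t and pick_vdev() -- this is not the actual
allocator code) illustrates the idea: every vdev earns credit in
proportion to its free space, the vdev with the most credit is picked
and pays back the total, so emptier vdevs are chosen more often without
starving the full ones.

    #include <stdint.h>

    typedef struct vdev_stub {
            uint64_t        vd_capacity;    /* total bytes on this vdev */
            uint64_t        vd_allocated;   /* bytes already allocated */
            int64_t         vd_credit;      /* running weighted-RR credit */
    } vdev_stub_t;

    /* Sketch only -- not the real allocator. */
    static int
    pick_vdev(vdev_stub_t *vd, int nvdevs)
    {
            int64_t total = 0;
            int i, best = 0;

            for (i = 0; i < nvdevs; i++) {
                    int64_t free = (int64_t)(vd[i].vd_capacity -
                        vd[i].vd_allocated);

                    /* +1 so completely full vdevs still cycle occasionally. */
                    vd[i].vd_credit += free + 1;
                    total += free + 1;
                    if (vd[i].vd_credit > vd[best].vd_credit)
                            best = i;
            }
            vd[best].vd_credit -= total;
            return (best);
    }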
This algorithm leaves a lot to be desired when one or more disks are
misbehaving in a non-fatal manner. Currently, if you have a
single vdev which is slower than the rest, it will slow down operation
of the entire pool. This includes a failing device, which continues to
respond to requests, albeit very slowly. We have ideas for improvement
in this area, but have been preoccupied with some of the lower-hanging
performance wins. Eventually, we'll want to take a number of factors
into account, including capacity, latency, past errors, etc.
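To give a feel for the direction (purely illustrative -- nothing like
this exists in the code today, and the names and penalties below are
invented), the free-space weight in the earlier sketch could be
replaced by a composite score that also folds in observed latency and
error history:

    /* Hypothetical scoring function for illustration only. */
    static uint64_t
    vdev_score(uint64_t free_bytes, uint64_t avg_latency_us,
        uint64_t recent_errors)
    {
            uint64_t score = free_bytes;
            uint64_t shift = avg_latency_us / 10000;  /* halve per 10ms */

            if (shift > 63)
                    shift = 63;
            score >>= shift;

            /* Devices with recent errors get proportionally less work. */
            score /= (recent_errors + 1);

            return (score);
    }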
- Eric
On Tue, Jul 18, 2006 at 08:49:28AM -0400, David Blacklock wrote:
> Hello,
>
> What is the access algorithm used within multi-component pools for a
> given pool, and does it change when one or more members of the pool
> become degraded ?
>
> examples:
>
> zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror
> c5t0d0 c6t0d0
>
> or;
>
> zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0
>
> As files are created on the filesystem within these pools, are they
> distributed round-robin across the components, or do they stay with the
> first component till full then go to the next, or is some other
> technique used ?
>
> Then, when a component becomes degraded, c1t0d0 for example, are new
> files still created on the degraded component, or is that component
> skipped until no longer degraded or until it is needed; i.e. you run out
> of space on the other components that have good status ?
>
> -thanks,
> -Dave
>
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock