A few tests have demonstrated a difference in how ZFS distributes data in a pool
composed of multiple mirror devices versus a pool composed of multiple RAID-Z
devices. In both cases the question centers on how ZFS distributes data when one
of the top-level devices in a pool is larger than the others.
In a pool composed of mirrors, if one mirror device has more capacity than the
rest, new data is distributed in a manner that appears to be proportional to the
size of each device.
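As a thought experiment, that behavior is consistent with an allocator that
weights each top-level vdev by its remaining free space. The sketch below is a
hypothetical Python model, not ZFS's actual allocator (the real logic lives in
the metaslab code and is more involved); pick_vdev and the mirror sizes are
illustrative only.

import random
from collections import Counter

def pick_vdev(vdevs):
    # Hypothetical model: choose a top-level vdev with probability
    # proportional to its free space, so a larger mirror receives
    # proportionally more of the new data.
    return random.choices(vdevs, weights=[v["free"] for v in vdevs])[0]

mirrors = [{"name": "mirror-0", "free": 7.75},
           {"name": "mirror-1", "free": 7.75},
           {"name": "mirror-2", "free": 8.75}]
print(Counter(pick_vdev(mirrors)["name"] for _ in range(100_000)))
# mirror-2 should take roughly 8.75/24.25 (~36%) of the allocations.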
In a pool composed of RAID-Z devices of differing size, but where each RAID-Z
device uses the same number of disks, ZFS appears to distribute data evenly
among the RAID-Z devices. If one RAID-Z device comprises more disks than the
others, it receives more data than the others.
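One hypothesis consistent with this is that equal-width RAID-Z vdevs are
treated as interchangeable targets, and that what gets balanced is work per
member disk rather than per vdev, so a wider RAID-Z vdev absorbs a
proportionally larger share. A rough, hypothetical model (raidz_share is an
illustrative name, not ZFS code), which the rz_test2 results below appear to
bear out:

def raidz_share(disk_counts, total_gb):
    # Hypothetical model: data lands on RAID-Z vdevs in proportion to
    # their disk counts, independent of per-disk capacity.
    total_disks = sum(disk_counts)
    return [round(total_gb * n / total_disks, 2) for n in disk_counts]

print(raidz_share([3, 3, 3], 9.0))   # [3.0, 3.0, 3.0]: even split
print(raidz_share([3, 3, 4], 9.0))   # [2.7, 2.7, 3.6]: wider vdev gets more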
These questions arise:
What is the logic used to decide where to store new data in a pool with devices
of dissimilar size, or, in the case of RAID-Z, with dissimilar numbers of member
disks?
What difference in this logic exists between pools composed of mirrors and pools
composed of RAID-Z devices?
What accounts for the difference in RAID-Z behavior depending on whether all
devices use the same or a different number of disks, as opposed to a simple
difference in capacity?
Test results:
# zpool create mir_test mirror c0t226000C0FFA7C140d9 c1t216000C0FF87C140d9 \
    mirror c0t226000C0FFA7C140d10 c1t216000C0FF87C140d10 \
    mirror c0t226000C0FFA7C140d25 c1t216000C0FF87C140d25
# zpool iostat -v mir_test
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
mir_test                      34K  24.2G      0      2      0  5.12K
  mirror                        0  7.75G      0      0      0      0
    c0t226000C0FFA7C140d9       -      -      0      3      0  91.9K
    c1t216000C0FF87C140d9       -      -      0      2      0  91.9K
  mirror                        0  7.75G      0      0      0      0
    c0t226000C0FFA7C140d10      -      -      0      2      0  91.9K
    c1t216000C0FF87C140d10      -      -      0      2      0  91.9K
  mirror                       34K  8.75G      0      2      0  5.12K
    c0t226000C0FFA7C140d25      -      -      0      5      0  97.3K
    c1t216000C0FF87C140d25      -      -      0      5      0  97.3K
--------------------------  -----  -----  -----  -----  -----  -----
# mkfile 6g /mir_test/file_6g
# zpool iostat -v mir_test
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
mir_test                    6.00G  18.2G      0    216    850  26.6M
  mirror                    1.96G  5.79G      0     70     53  8.69M
    c0t226000C0FFA7C140d9       -      -      0     70    283  8.70M
    c1t216000C0FF87C140d9       -      -      0     70    566  8.70M
  mirror                    1.96G  5.79G      0     71     17  8.69M
    c0t226000C0FFA7C140d10      -      -      0     70    283  8.69M
    c1t216000C0FF87C140d10      -      -      0     70      0  8.69M
  mirror                    2.08G  6.67G      0     75    779  9.19M
    c0t226000C0FFA7C140d25      -      -      0     74  1.94K  9.20M
    c1t216000C0FF87C140d25      -      -      0     74    283  9.20M
--------------------------  -----  -----  -----  -----  -----  -----
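A quick back-of-the-envelope comparison of that split against a strictly
size-proportional one (the per-mirror capacities are taken from the iostat
output above):

caps = [7.75, 7.75, 8.75]             # per-mirror capacity in GB
for cap in caps:
    print(f"{cap}G mirror: {6.0 * cap / sum(caps):.2f}G expected")
# -> 1.92G, 1.92G, 2.16G expected vs. 1.96G, 1.96G, 2.08G observed:
#    roughly, though not exactly, proportional to size.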
# zpool destroy mir_test
# zpool create -f rz_test1 raidz c0t226000C0FFA7C140d9 c0t226000C0FFA7C140d10 \
    c0t226000C0FFA7C140d11 \
    raidz c1t216000C0FF87C140d9 c1t216000C0FF87C140d10 c1t216000C0FF87C140d11 \
    raidz c0t226000C0FFA7C140d25 c0t226000C0FFA7C140d26 c0t226000C0FFA7C140d27
# zpool iostat -v rz_test1
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
rz_test1                      60K  73.0G      0      2      0  4.39K
  raidz                         0  23.4G      0      0      0      0
    c0t226000C0FFA7C140d9       -      -      0      2  6.00K  77.4K
    c0t226000C0FFA7C140d10      -      -      0      2  6.00K  77.4K
    c0t226000C0FFA7C140d11      -      -      0      2      0  77.4K
  raidz                         0  23.4G      0      0      0      0
    c1t216000C0FF87C140d9       -      -      0      2  6.00K  77.6K
    c1t216000C0FF87C140d10      -      -      0      2  6.00K  77.6K
    c1t216000C0FF87C140d11      -      -      0      2      0  77.6K
  raidz                        60K  26.2G      0      2      0  4.39K
    c0t226000C0FFA7C140d25      -      -      0      4  6.00K  79.9K
    c0t226000C0FFA7C140d26      -      -      0      3      0  79.5K
    c0t226000C0FFA7C140d27      -      -      0      4      0  79.8K
--------------------------  -----  -----  -----  -----  -----  -----
# mkfile 6g /rz_test1/file_6g
# zpool iostat -v rz_test1
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
rz_test1                    9.00G  64.0G      0    220    990  27.0M
  raidz                     3.00G  20.4G      0     73    107  8.99M
    c0t226000C0FFA7C140d9       -      -      0     37    503  4.50M
    c0t226000C0FFA7C140d10      -      -      0     37  1.33K  4.50M
    c0t226000C0FFA7C140d11      -      -      0     37    862  4.50M
  raidz                     3.00G  20.4G      0     73    215  8.98M
    c1t216000C0FF87C140d9       -      -      0     37  1.05K  4.50M
    c1t216000C0FF87C140d10      -      -      0     37  1.33K  4.50M
    c1t216000C0FF87C140d11      -      -      0     37    862  4.50M
  raidz                     3.00G  23.2G      0     73    667  8.99M
    c0t226000C0FFA7C140d25      -      -      0     37  1.61K  4.50M
    c0t226000C0FFA7C140d26      -      -      0     37    575  4.50M
    c0t226000C0FFA7C140d27      -      -      0     37    575  4.50M
--------------------------  -----  -----  -----  -----  -----  -----
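The 9.00G of raw usage for a 6 GB file is just single-parity overhead: a 3-disk
RAID-Z stripe holds two data blocks for every parity block, so raw consumption
is 1.5x the file size, and an even split across the three vdevs gives the 3.00G
per vdev shown above:

file_gb = 6
raw_gb = file_gb * 3 / 2     # 2 data + 1 parity per full stripe
print(raw_gb, raw_gb / 3)    # -> 9.0 total, 3.0 per vdev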
# zpool destroy rz_test1
# zpool create -f rz_test2 raidz c0t226000C0FFA7C140d9 c0t226000C0FFA7C140d10 \
    c0t226000C0FFA7C140d11 \
    raidz c1t216000C0FF87C140d9 c1t216000C0FF87C140d10 c1t216000C0FF87C140d11
# zpool add -f rz_test2 raidz c0t226000C0FFA7C140d25 c0t226000C0FFA7C140d26 \
    c0t226000C0FFA7C140d27 c1t216000C0FF87C140d27
# zpool iostat -v rz_test2
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
rz_test2                     151K  81.7G      0      2      0  5.28K
  raidz                        34K  23.4G      0      0      0  1.08K
    c0t226000C0FFA7C140d9       -      -      0      1  2.31K  39.8K
    c0t226000C0FFA7C140d10      -      -      0      1  2.31K  39.8K
    c0t226000C0FFA7C140d11      -      -      0      1  2.31K  39.8K
  raidz                       117K  23.4G      0      1      0  4.20K
    c1t216000C0FF87C140d9       -      -      0      2  2.31K  41.6K
    c1t216000C0FF87C140d10      -      -      0      2  2.31K  41.5K
    c1t216000C0FF87C140d11      -      -      0      2  2.31K  41.6K
  raidz                         0    35G      0      0      0      0
    c0t226000C0FFA7C140d25      -      -      0      5  9.71K   204K
    c0t226000C0FFA7C140d26      -      -      0      4  9.71K   204K
    c0t226000C0FFA7C140d27      -      -      0      4  9.71K   204K
    c1t216000C0FF87C140d27      -      -      0      5      0   204K
--------------------------  -----  -----  -----  -----  -----  -----
# mkfile 6g /rz_test2/file_6g
# zpool iostat -v rz_test2
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
rz_test2                    8.64G  73.1G      0    197    857  24.1M
  raidz                     2.86G  20.5G      0     62    566  7.66M
    c0t226000C0FFA7C140d9       -      -      0     31  1.44K  3.84M
    c0t226000C0FFA7C140d10      -      -      0     31    706  3.84M
    c0t226000C0FFA7C140d11      -      -      0     31    964  3.84M
  raidz                     2.86G  20.5G      0     64    130  7.66M
    c1t216000C0FF87C140d9       -      -      0     33  1.44K  3.84M
    c1t216000C0FF87C140d10      -      -      0     33  1.19K  3.84M
    c1t216000C0FF87C140d11      -      -      0     33    964  3.84M
  raidz                     2.93G  32.1G      0     83    187  10.3M
    c0t226000C0FFA7C140d25      -      -      0     42    826  3.45M
    c0t226000C0FFA7C140d26      -      -      0     42    826  3.45M
    c0t226000C0FFA7C140d27      -      -      0     42  1.10K  3.45M
    c1t216000C0FF87C140d27      -      -      0     42      0  3.45M
--------------------------  -----  -----  -----  -----  -----  -----
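Dividing the per-vdev write bandwidth above by the number of member disks
suggests the balance point is the individual disk, not the vdev (an inference
from these numbers, not a statement about the implementation):

for bw_mb, ndisks in [(7.66, 3), (7.66, 3), (10.3, 4)]:
    print(f"{ndisks}-disk raidz: {bw_mb / ndisks:.2f} MB/s per disk")
# -> 2.55, 2.55, and 2.58 MB/s per disk: nearly identical, so the
#    4-disk vdev receives about 4/3 the data of each 3-disk vdev.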
#
Thanks,
Jeff Ferreira