What is the current answer regarding replacing the HDDs in a raidz, one at
a time, with larger HDDs? The Best Practices wiki seems to suggest it is
possible (but perhaps only for mirrors, not raidz?).
I am currently running osol-b114.
I did this test with data files to simulate the situation:
# mkfile 1G disk0[12345]
-rw------T 1 root root 1073741824 May 23 09:19 disk01
-rw------T 1 root root 1073741824 May 23 09:19 disk02
-rw------T 1 root root 1073741824 May 23 09:20 disk03
-rw------T 1 root root 1073741824 May 23 09:20 disk04
-rw------T 1 root root 1073741824 May 23 09:20 disk05
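(mkfile is Solaris-specific; on a box without it, an untested dd
equivalent for the same five 1GB backing files might be:)
# for i in 1 2 3 4 5; do
>     dd if=/dev/zero of=/var/tmp/disk0$i bs=1024k count=1024
> done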
# zpool create grow raidz /var/tmp/disk01 /var/tmp/disk02 \
    /var/tmp/disk03 /var/tmp/disk04 /var/tmp/disk05
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
grow 4.97G 138K 4.97G 0% ONLINE -
# zfs create -o compression=on -o atime=off grow/fs1
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
grow 153K 3.91G 35.1K /grow
grow/fs1 33.6K 3.91G 33.6K /grow/fs1
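(Note the two different sizes: zpool list reports raw pool capacity,
5 x 1G = ~4.97G, while zfs list reports usable space after raidz1
parity, (5 - 1) x 1G = ~3.91G once metadata overhead is taken out,
if I have the accounting right.)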
# zpool status grow
NAME STATE READ WRITE CKSUM
grow ONLINE 0 0 0
raidz1 ONLINE 0 0 0
/var/tmp/disk01 ONLINE 0 0 0
/var/tmp/disk02 ONLINE 0 0 0
/var/tmp/disk03 ONLINE 0 0 0
/var/tmp/disk04 ONLINE 0 0 0
/var/tmp/disk05 ONLINE 0 0 0
---------------------------------------------------------
That is our starting position: a raidz using five 1GB disks, giving us a
3.91G filesystem in total.
Now to replace each disk, one at a time, with a 2GB one.
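First, five 2GB files, made the same way (something like):
# mkfile 2G bigger_disk0[12345]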
-rw------T 1 root root 2147483648 May 23 09:36 bigger_disk01
-rw------T 1 root root 2147483648 May 23 09:37 bigger_disk02
-rw------T 1 root root 2147483648 May 23 09:40 bigger_disk03
-rw------T 1 root root 2147483648 May 23 09:40 bigger_disk04
-rw------T 1 root root 2147483648 May 23 09:41 bigger_disk05
# zpool offline grow /var/tmp/disk01
# zpool replace grow /var/tmp/disk01 /var/tmp/bigger_disk01
# zpool status grow
pool: grow
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Sat May 23 09:43:51 2009
config:
NAME STATE READ WRITE CKSUM
grow ONLINE 0 0 0
raidz1 ONLINE 0 0 0
/var/tmp/bigger_disk01 ONLINE 0 0 0 1.04M resilvered
/var/tmp/disk02 ONLINE 0 0 0
/var/tmp/disk03 ONLINE 0 0 0
/var/tmp/disk04 ONLINE 0 0 0
/var/tmp/disk05 ONLINE 0 0 0
Do the same for all 5 disks....
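That is: offline each disk, replace it with its bigger counterpart, and
let the resilver finish before touching the next one. Roughly this
untested loop (the grep string matches what zpool status prints
mid-resilver, I believe):
# for i in 2 3 4 5; do
>     zpool offline grow /var/tmp/disk0$i
>     zpool replace grow /var/tmp/disk0$i /var/tmp/bigger_disk0$i
>     while zpool status grow | grep "resilver in progress" >/dev/null; do
>         sleep 5
>     done
> done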
# zpool status grow
scrub: resilver completed after 0h0m with 0 errors on Sat May 23 09:46:28 2009
config:
NAME STATE READ WRITE CKSUM
grow ONLINE 0 0 0
raidz1 ONLINE 0 0 0
/var/tmp/bigger_disk01 ONLINE 0 0 0
/var/tmp/bigger_disk02 ONLINE 0 0 0
/var/tmp/bigger_disk03 ONLINE 0 0 0
/var/tmp/bigger_disk04 ONLINE 0 0 0
/var/tmp/bigger_disk05 ONLINE 0 0 0 1.04M resilvered
I was somewhat hoping it would all just be magical here, but unfortunately:
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
grow 4.97G 5.35M 4.96G 0% ONLINE -
It is still the same size. I would have expected it to go to about 9G
(five 2G devices, less overhead).
---------------------------------------------------------------------
I tried a few commands to see if I could tell it to make it happen:
scrub, zfs unmount/mount, zpool upgrade, etc. No difference.
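For the record, those attempts looked roughly like this (reconstructed;
exact invocations not preserved):
# zpool scrub grow
# zfs unmount grow/fs1 ; zfs mount grow/fs1
# zpool upgrade grow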
Then something peculiar happened. I tried to export it and import it
again, to see if that would help:
# zpool export grow
# zpool import grow
cannot import 'grow': no such pool available
And alas, "grow" is completely gone, and no amount of "import" would see
it. Oh well.
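(One belated thought: zpool import only scans /dev/dsk by default, so a
file-backed pool probably has to be pointed at its directory, something
like:
# zpool import -d /var/tmp grow
I have not gone back to confirm that, though.)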
--
Jorgen Lundman | <lundman at lundman.net>
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo | +81 (0)90-5578-8500 (cell)
Japan | +81 (0)3 -3375-1767 (home)