W Sanders
2009-Jul-22 16:31 UTC
[zfs-discuss] When will shrink / evict be coming? With respect to drive upgrades ...
We have a Thumper that we got at a good price from the Sun Educational Grant program (thank you Sun!), but it came populated with 500GB drives. The box will be used as a virtual tape library and general-purpose NFS/iSCSI/Samba file server for users' stuff. Probably, in about two years, we will want to reload it with whatever the big >1TB drive of the day is. This gives me a problem with respect to planning for the future, since currently one can't shrink a zpool.

I can think of a few approaches:

1) Initial configuration with two zpools. This lets us do the upgrade just before utilization hits 50%. We can migrate everyone off pool 1, destroy it, upgrade it, and either repeat the process for pool 2 or join the pools together.

2) Replace with new, bigger disks, and slice them in half. Use one slice to rejoin the existing pool, and the second slice to start a new pool.

3) Unlikely: Mirror the existing zpool with some kind of external vdev. I've tested this - I actually mirrored a physical disk with an NFS vdev once, and to my amazement it worked. Unfortunately the Thumper is the biggest box we have right now; we don't have any other devices with 18+TB of space.

3 1/2) Tape, like failure, is always an option.

Either way, with 1 or 2 we're stuck with two pools on the same host, but since I have 40+ disks to spread the IO over, I'm not too worried.

Option 4) If I just replace the 500GB disks one by one with 1TB disks in an existing single zpool, will the zpool magically have twice as much space when I am done replacing the very last disk? (A rough sketch of what I mean is in the PS below.) I don't have any way to test this. In the past I have been able to do this with *some* RAID5 array controllers.

If you've been through this drill, let us know how you handled it. Thanks in advance,

-W Sanders
St Marys College of CA
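PS - For concreteness, what I have in mind for option 4 is something like the following, one disk at a time. The device names are invented and I haven't tried this on our box, so treat it as a sketch:

(pull a 500GB disk, insert a 1TB disk in the same slot, then:)
# zpool replace pool c1t0d0
# zpool status pool
(wait for the resilver to finish before touching the next disk; repeat for all 40+ drives)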
Cindy.Swearingen at Sun.COM
2009-Jul-22 16:53 UTC
[zfs-discuss] When will shrink / evict be coming? With respect to drive upgrades ...
Hi--

With 40+ drives, you might consider two pools anyway. If you want to use a ZFS root pool, something like this:

- Mirrored ZFS root pool (2 x 500 GB drives)
- Mirrored ZFS non-root pool for everything else

Mirrored pools are flexible and provide good performance. See this site for more tips:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Option 4 below is your best option. Depending on the Solaris release, ZFS will see the expanded space. If not, see this section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Changing_Disk_Capacity_Sizes

Cindy

On 07/22/09 10:31, W Sanders wrote:
> [...]
> Option 4) If I just replace the 500GB disks one by one with 1TB disks in an existing single zpool, will the zpool magically have twice as much space when I am done replacing the very last disk? I don't have any way to test this. In the past I have been able to do this with *some* RAID5 array controllers.
> [...]
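P.S. A rough sketch of creating the non-root pool, in case it helps. The device names are placeholders for your controller/target numbers, and the root pool is normally created by the installer rather than by hand:

# zpool create tank \
    mirror c0t0d0 c1t0d0 \
    mirror c0t1d0 c1t1d0 \
    mirror c0t2d0 c1t2d0
(...and so on across the remaining drives, plus a spare or two:)
# zpool add tank spare c5t7d0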
Ross
2009-Jul-22 17:11 UTC
[zfs-discuss] When will shrink / evict be coming? With respect to drive upgrades ...
4. Yes :-D

While you can't shrink, you can already replace drives with bigger ones, and ZFS does increase the size at the end (although I think it needs an unmount/mount right now).

However, even though you can simply pull one drive and replace it with a bigger one, that does degrade your array. So instead, depending on your needs, I'd suggest something like creating one pool out of a bunch of raid-z2 vdevs, with 2-4 drives allocated as hot spares. That allows you in the future to replace the spare drives with new 2TB drives, then boot and run a 'zpool replace <old disk> <new disk>' for each of the spares. That will switch the drives to the bigger size without degrading the array. Then, when that finishes, remove the replaced drives (which are now the new spares), and repeat.

The reason I suggest up to 4 spares is that it's likely to take some time to resilver, and even doing 4 at once you'll need to do this 12 times to upgrade a Thumper. So if you are planning to upgrade, sacrificing that space now is probably a worthwhile investment.

Sun have confirmed that 2TB drives will be supported, and probably 4TB ones too. I've also tested this out myself (although just with a single 1TB drive) on a Thumper.
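To make that concrete, a rough sketch - the disk names are invented, the 6-wide vdevs are just one reasonable layout, and I'd double-check the spare handling, since I'm going from memory:

# zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    spare c7t6d0 c7t7d0
(more raidz2 vdevs in between for the rest of the 40+ drives)

Then at upgrade time, after physically swapping 2TB drives into the spare slots:

# zpool remove tank c7t6d0 c7t7d0
(drop the old spare entries so the new disks are free to use)
# zpool replace tank c0t0d0 c7t6d0
# zpool replace tank c0t1d0 c7t7d0
# zpool status tank
(the old disks detach when the resilver completes; pull them and repeat)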
W Sanders
2009-Jul-22 17:19 UTC
[zfs-discuss] When will shrink / evict be coming? With respect to drive upgrades ...
Thanks! Rats, we're running GA u7 and not OpenSolaris for now:

# zpool set autoexpand=on pool
cannot set property for 'pool': invalid property 'autoexpand'

(My pool is, in fact, named "pool".)

We're not in production yet, but I eventually have to install Veritas NetBackup on this thing (please feel free to pity me), and I don't know if they are supporting OpenSolaris yet.

-w
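PS - If I'm reading the troubleshooting guide Cindy linked correctly, the workaround on releases without autoexpand is roughly this, once every disk has been replaced and resilvered (untested on our box, so treat it as a sketch):

# zpool export pool
# zpool import pool
# zpool list pool
(the extra capacity should show up after the re-import)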