I am playing with ZFS on a JetStor 516F with 9 1TB eSATA drives. These are our first real tests with ZFS, and I am working out how to replace our HA-NFS UFS file systems with ZFS counterparts. One of the things I am concerned with is how to replace a disk array/vdev in a pool. It appears that is not possible at the moment.

For example, I have this array whose drives I want to replace with bigger ones. I currently have 3 raidz vdevs and I am using about two thirds of the total space. So, to keep ahead of the curve, I want to replace the 1TB drives with 1.5TB drives.

Another example: I have a pool with some older T3Bs and a newer SE3511. I want to remove the T3Bs from the pool and replace them with an expansion tray on the SE3511.

Any idea when I might be able to do this?

Matt
On 02 August, 2007 - Matthew C Aycock sent me these 1,0K bytes:

> I am playing with ZFS on a JetStor 516F with 9 1TB eSATA drives. These
> are our first real tests with ZFS, and I am working out how to replace
> our HA-NFS UFS file systems with ZFS counterparts. One of the things I
> am concerned with is how to replace a disk array/vdev in a pool. It
> appears that is not possible at the moment.
>
> For example, I have this array whose drives I want to replace with
> bigger ones. I currently have 3 raidz vdevs and I am using about two
> thirds of the total space. So, to keep ahead of the curve, I want to
> replace the 1TB drives with 1.5TB drives.

zpool replace mypool 1tbdevice 1.5tbdevice

When all devices in a raidz have been replaced with larger ones, the raidz grows automatically, IIRC (it does for mirrors, at least).

> Another example: I have a pool with some older T3Bs and a newer
> SE3511. I want to remove the T3Bs from the pool and replace them with
> an expansion tray on the SE3511.

zpool replace mypool t3bdevice se3511device

> Any idea when I might be able to do this?

A long time ago. You can replace a single device with another device. What you can't do at the moment is, for example, replace a hwraid5 (single device) with a raidz (multiple devices), or replace 3 T3Bs with a single SE3511. For that, you need the evacuate/shrink feature, which I've heard has ETAs around year's end.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
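To make the replace-and-grow approach concrete, here is a minimal sketch of the loop Tomas describes, assuming hypothetical device names (c5t0d0..c5t8d0 for the old 1TB disks, c6t0d0..c6t8d0 for the new 1.5TB ones) and a pool named mypool; adjust for your own setup, and note that the exact "zpool status" wording may vary between builds:

    # Replace each raidz member one at a time; never pull the next
    # disk until the previous resilver has finished.
    for i in 0 1 2 3 4 5 6 7 8; do
        zpool replace mypool c5t${i}d0 c6t${i}d0
        # crude wait for the resilver to complete
        while zpool status mypool | grep -q "resilver in progress"; do
            sleep 60
        done
    done
    zpool list mypool   # the extra capacity should show up here

If the new space does not appear once the last disk has resilvered, an export and re-import of the pool has been reported to trigger the expansion on some builds.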
As a novice, I understand that if you don't have any redundancy between vdevs this is going to be a problem. Perhaps you can add mirroring to your existing pool and make it work that way? A pool made up of mirror pairs:

{cyrus4:137} zpool status
  pool: ms2
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Jul 22 00:47:51 2007
config:

        NAME                                       STATE     READ WRITE CKSUM
        ms2                                        ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c4t600C0FF0000000000A7E0A0E6F8A1000d0  ONLINE       0     0     0
            c4t600C0FF0000000000A7E8D1EA7178800d0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c4t600C0FF0000000000A7E0A7219D78100d0  ONLINE       0     0     0
            c4t600C0FF0000000000A7E8D7B3709D800d0  ONLINE       0     0     0

errors: No known data errors

So remove one half of a mirror and replace it with a larger one. Wait for everything to sync up, then remove the other half and add a larger one. Suddenly the pool expands.

Alternatively, set up new arrays on a second server and use zfs send and receive to duplicate the data.
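A sketch of that mirror-swap procedure, using placeholder device names (OLD1/OLD2 for the existing halves of one mirror pair, NEW1/NEW2 for the larger replacements). Attaching the new half before detaching the old one is a slightly safer ordering than detach-first, since the mirror keeps its redundancy throughout:

    zpool attach ms2 OLD1 NEW1   # temporarily a 3-way mirror; resilver starts
    # wait for "zpool status ms2" to show the resilver completed
    zpool detach ms2 OLD1        # back to 2-way, one half now larger
    zpool attach ms2 OLD2 NEW2
    # wait for the second resilver to complete
    zpool detach ms2 OLD2        # both halves larger; capacity grows

And the send/receive alternative, again with placeholder names (ms2/data, newserver, newpool are hypothetical):

    zfs snapshot ms2/data@migrate
    zfs send ms2/data@migrate | ssh newserver zfs receive newpool/data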