Assuming I have a zpool which consists of a simple 2-disk mirror: how do I attach a third disk (disk3) to this zpool to mirror the existing data, then split the mirror and remove disk0 and disk1, leaving a single-disk zpool consisting of the new disk3? In other words, an online data migration.

[root]# zpool status -v
  pool: apps
 state: ONLINE
 scrub: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        apps                 ONLINE       0     0     0
          mirror             ONLINE       0     0     0
            /root/zfs/disk0  ONLINE       0     0     0
            /root/zfs/disk1  ONLINE       0     0     0

errors: No known data errors

The use case here is that we've implemented new storage. The new (third) LUN is on a RAID10 Hitachi SAN, while the existing mirror is on local SAS disks. Back in the VxVM world, this would be done by mirroring the dg and then splitting the mirror. I understand we are moving from a ZFS mirror to a single stripe.

Thanks
Hi Matthew,

Just attach disk3 to the existing mirrored top-level vdev, wait for resilvering to complete, then detach disk0 and disk1. This will leave you with only disk3 in your pool. Note that you will lose ZFS's redundancy features (self-healing, ...).

# zpool create test mirror /export/disk0 /export/disk1
# zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        test               ONLINE       0     0     0
          mirror           ONLINE       0     0     0
            /export/disk0  ONLINE       0     0     0
            /export/disk1  ONLINE       0     0     0

errors: No known data errors
# zpool attach test /export/disk1 /export/disk3
# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Thu Mar 26 19:55:24 2009
config:

        NAME               STATE     READ WRITE CKSUM
        test               ONLINE       0     0     0
          mirror           ONLINE       0     0     0
            /export/disk0  ONLINE       0     0     0
            /export/disk1  ONLINE       0     0     0
            /export/disk3  ONLINE       0     0     0  71.5K resilvered
# zpool detach test /export/disk0
# zpool detach test /export/disk1
# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Thu Mar 26 19:55:24 2009
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
        /export/disk3    ONLINE       0     0     0  71.5K resilvered

errors: No known data errors

F.

On 03/26/09 08:20, Matthew Angelo wrote:
> [...]
_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
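The attach / wait-for-resilver / detach sequence above can also be scripted so the detach only happens once the resilver has finished. The sketch below is one possible way to do that, assuming the pool and device paths from the example; the `resilver in progress` string it greps for, and the 10-second poll interval, are assumptions about your `zpool status` output and environment, so verify them on your system first.

```shell
#!/bin/sh
# Sketch: migrate a mirrored pool onto a new device by attaching it,
# waiting for the resilver to finish, then detaching the old sides.

# Returns success while a resilver is still running, judging by the
# text passed in (expected to be the output of `zpool status <pool>`).
resilver_in_progress() {
    printf '%s\n' "$1" | grep -q 'resilver in progress'
}

migrate() {
    pool=$1 old1=$2 old2=$3 new=$4
    zpool attach "$pool" "$old2" "$new" || return 1
    # Poll until the resilver completes before removing redundancy.
    while resilver_in_progress "$(zpool status "$pool")"; do
        sleep 10
    done
    zpool detach "$pool" "$old1"
    zpool detach "$pool" "$old2"
}

# Example invocation (assumed paths, run as root):
# migrate test /export/disk0 /export/disk1 /export/disk3
```

The polling loop is the important part: detaching disk0 and disk1 before the resilver completes would leave disk3 with incomplete data.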
Hi Francois,

Thanks for confirming. That did the trick. I kept thinking I had to mirror at the highest level (zpool), then split. I actually did it in one step fewer than you mention, by using replace instead of attach followed by detach, but what you said is 100% correct.

zpool replace apps /root/zfs/disk0 /root/zfs/disk3
zpool detach apps /root/zfs/disk1

Thanks again!

On Thu, Mar 26, 2009 at 7:00 PM, Francois Napoleoni <Francois.Napoleoni at sun.com> wrote:
> [...]
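Since both examples in this thread use file-backed vdevs (/root/zfs/diskN, /export/diskN), the whole migration can be rehearsed on a scratch pool before touching real LUNs. The sketch below creates sparse backing files for that purpose; the /tmp paths, the 128 MB size, and the `scratch` pool name are assumptions (ZFS requires vdevs of at least 64 MB). The `zpool` commands are left commented out because creating a pool requires root privileges.

```shell
#!/bin/sh
# Sketch: create file-backed vdevs to rehearse the mirror migration
# on a throwaway pool. Paths and sizes are illustrative assumptions.
mkdir -p /tmp/zfstest
for d in disk0 disk1 disk3; do
    # 128 MB sparse files; fall back to mkfile on older Solaris,
    # which lacks truncate(1).
    truncate -s 128M "/tmp/zfstest/$d" 2>/dev/null || \
        mkfile -n 128m "/tmp/zfstest/$d"
done

# Rehearse the one-step replace variant from this thread (run as root):
# zpool create scratch mirror /tmp/zfstest/disk0 /tmp/zfstest/disk1
# zpool replace scratch /tmp/zfstest/disk0 /tmp/zfstest/disk3
# zpool detach scratch /tmp/zfstest/disk1
# zpool destroy scratch
```

Rehearsing on file vdevs is cheap insurance: the attach/replace/detach semantics are identical to those on real disks, so any mistake in the command sequence shows up on throwaway files instead of production LUNs.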