We have a ZFS mirror that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box and present an iSCSI LUN of about 200GB from the EqualLogic box to the V440. The EqualLogic box is configured as hardware RAID 50 (with two hot spares for redundancy).

My question is: what's the best approach to moving the data off the ZFS mirror to this LUN, or joining this LUN to ZFS without keeping a ZFS mirror setup anymore, given the disk space wasted by mirroring? Whatever the approach, the current data cannot be lost, and the least downtime would help too.

I thought the best approach might be to create the 200GB LUN as a ZFS pool, and then do a ZFS export from the 73GB mirror and a ZFS import on the 200GB iSCSI LUN. This LUN would still be RAID 50 underneath even though it's a ZFS file system, so we would still have some redundancy, and we could eventually grow this pool. Or would it be better to go UFS on the LUN and copy the data over?

What would be the best approach for moving this data or configuring the disks involved? I'm open to any suggestions.

This message posted from opensolaris.org
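One note on the export/import idea: zpool export and zpool import move an existing pool between hosts; they do not copy data from one pool into another. If you did want a copy-based migration into a separate pool on the LUN, zfs send/receive is the usual tool. A hedged sketch, with "tank" (the existing mirror) and "bigtank" (a new pool on the LUN) as hypothetical names:

```shell
# Hypothetical pool names; substitute your own.
zfs snapshot -r tank@migrate
# -R sends the whole dataset tree with properties; older ZFS builds
# may lack -R and need one send per dataset.
zfs send -R tank@migrate | zfs receive -Fd bigtank
```

This copies the data while the source stays online, at the cost of a brief final cutover; the attach/detach approach discussed below avoids even that.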
Robert Milkowski
2008-Jan-15 07:54 UTC
[zfs-discuss] Moving zfs to an iscsci equallogic LUN
Hello Kory,

Tuesday, January 15, 2008, 5:47:31 AM, you wrote:

KW> We have a Mirror setup in ZFS that's 73GB (two internal disks on
KW> a Sun Fire V440). [...]
KW> What would be the best approach for moving this data or
KW> configuring the disks involved? I'm open to any suggestions.

So you've got a mirror of two 73GB disks, and you've got an iSCSI LUN which is 200GB in size. You should be able to attach that iSCSI LUN to create a 3-way mirror, and once it's synchronized you should be able to detach the two 73GB disks. After you detach the 73GB disks, the pool should grow automatically.

Make sure to use zpool attach and *NOT* zpool add.

--
Best regards,
Robert Milkowski
mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
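The attach-then-detach sequence above might look like the following sketch. The pool name "tank" and the device names are assumptions; substitute your actual pool, internal disks, and iSCSI LUN device:

```shell
# Hypothetical names: pool "tank", internal disks c1t0d0/c1t1d0,
# iSCSI LUN c2t0d0.
zpool attach tank c1t0d0 c2t0d0   # LUN becomes the third mirror side
zpool status tank                 # repeat until the resilver completes
zpool detach tank c1t0d0          # remove first 73GB disk
zpool detach tank c1t1d0          # remove second 73GB disk; pool grows
```

The danger Robert flags is real: zpool add would instead stripe the LUN alongside the mirror as a new top-level vdev, which cannot be undone, whereas zpool attach just widens the existing mirror.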
What would be the commands for the three-way mirror, or an example of what you're describing? I thought the 200GB LUN would have to be the same size to attach to the existing mirror, and that you would have to attach two LUN disks rather than one. Once it attaches, it automatically resilvers (syncs) the disk; then, if I wanted to, could I remove the two 73GB disks, or keep them in the pool and expand the pool later?
Use zpool replace to swap one side of the mirror with the iSCSI LUN.

--
mikee
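A sketch of the zpool replace route, again with assumed names (pool "tank", old 73GB disks c1t0d0/c1t1d0, iSCSI LUN c2t0d0):

```shell
# Hypothetical names; substitute your own devices.
zpool replace tank c1t1d0 c2t0d0   # resilver onto the LUN, then the
                                   # old disk is detached automatically
zpool status tank                  # watch the resilver progress
zpool detach tank c1t0d0           # drop the remaining 73GB side
```

Note the trade-off versus attach: replace leaves you with only two copies during the resilver (the remaining 73GB disk plus the in-progress LUN), whereas attaching first gives a temporary 3-way mirror and never drops below two complete copies.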
Robert Milkowski
2008-Jan-15 14:51 UTC
[zfs-discuss] Moving zfs to an iscsci equallogic LUN
Hello Kory,

Tuesday, January 15, 2008, 1:46:40 PM, you wrote:

KW> What would be the commands for the three way mirror or an example
KW> of what you're describing? [...]

No, if you are attaching another disk to a mirror, it doesn't have to be the same size; it can be bigger. However, you won't be able to see all the space on the larger disk as long as it forms an N-way mirror with smaller devices.

And yes, if you attach another disk to a mirror, it will automatically resilver, and you can keep the previous two disks; you will get a 3-way mirror (you can create an N-way mirror in general). Once you're happy that the new disk is working properly, you just detach the two old disks and your pool grows automatically. Keep in mind that the reverse is not possible (yet).

Below is an example showing your case.

# mkfile 512m disk1
# mkfile 512m disk2
# mkfile 1024m disk3
# zpool create test mirror /root/disk1 /root/disk2
# zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            /root/disk1  ONLINE       0     0     0
            /root/disk2  ONLINE       0     0     0

# cp -rp /lib/ /test/
# zpool list
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
test   504M  83.2M   421M   16%  ONLINE  -
#
# zpool attach test /root/disk2 /root/disk3
# zpool status
  pool: test
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 69.59% done, 0h0m to go
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            /root/disk1  ONLINE       0     0     0
            /root/disk2  ONLINE       0     0     0
            /root/disk3  ONLINE       0     0     0

errors: No known data errors
#

Waiting for the resilvering to complete:

# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jan 15 14:41:13 2008
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            /root/disk1  ONLINE       0     0     0
            /root/disk2  ONLINE       0     0     0
            /root/disk3  ONLINE       0     0     0

errors: No known data errors
#
# zpool list
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
test   504M  83.3M   421M   16%  ONLINE  -
#
# zpool detach test /root/disk1
# zpool detach test /root/disk2
# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jan 15 14:41:13 2008
config:

        NAME           STATE     READ WRITE CKSUM
        test           ONLINE       0     0     0
        /root/disk3    ONLINE       0     0     0

errors: No known data errors
#
# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
test   1016M  83.2M   933M    8%  ONLINE  -
#

So we've migrated the data from a 2-way mirror to just one disk, live, without unmounting file systems, etc. If your third disk is already protected and you follow the above procedure, you will have a protected configuration the whole time (although at the end you rely on disk3's built-in redundancy).

--
Best regards,
Robert Milkowski
mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com