Walter Faleiro
2007-Dec-06 19:05 UTC
[zfs-discuss] Moving ZFS file system to a different system
Hi All,

We are currently having a hardware issue with our ZFS file server, hence the file system is unusable. We are planning to move it to a different system.

The setup on the file server when it was running was:

bash-3.00# zpool status
  pool: store1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backup      ONLINE       0     0     0
          c1t2d1    ONLINE       0     0     0
          c1t2d2    ONLINE       0     0     0
          c1t2d3    ONLINE       0     0     0
          c1t2d4    ONLINE       0     0     0
          c1t2d5    ONLINE       0     0     0

errors: No known data errors

  pool: store2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        store       ONLINE       0     0     1
          c1t3d0    ONLINE       0     0     0
          c1t3d1    ONLINE       0     0     0
          c1t3d2    ONLINE       0     0     1
          c1t3d3    ONLINE       0     0     0
          c1t3d4    ONLINE       0     0     0

errors: No known data errors

store1 was an external RAID device with one slice configured to boot the system plus swap, and the remaining disk space configured for use with ZFS. store2 was a similar external RAID device which had all slices configured for use with ZFS.

Since both are SCSI RAID devices, we are thinking of booting up the former using a different Sun box. Are there any precautions to be taken to avoid data loss?

Thanks,
--W
Hello Walter,

Thursday, December 6, 2007, 7:05:54 PM, you wrote:

> Since both are SCSI RAID devices, we are thinking of booting up the
> former using a different Sun box.
>
> Are there some precautions to be taken to avoid any data loss?

Just make sure the external storage is not connected to both hosts at the same time. Once you connect it to the other host, simply import both pools with -f (as the pools weren't cleanly exported, I guess).

Please also notice that you've encountered one uncorrectable error in the store2 pool. Well, actually it looks like it was corrected, judging from the message. IIRC it's a known bug (which should already have been fixed) - a metadata cksum error propagates to the top-level vdev unnecessarily.

--
Best regards,
Robert Milkowski            mailto:rmilkowski@task.gda.pl
                            http://milek.blogspot.com
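A minimal sketch of the steps Robert describes, run on the replacement host after the storage has been attached. This is a sketch only: it assumes the pool names store1 and store2 from the zpool status output above, and that the old host stays powered off or disconnected the whole time.

bash-3.00# zpool import              # list pools found on the newly attached devices
bash-3.00# zpool import -f store1    # -f forces import of a pool that was not cleanly exported
bash-3.00# zpool import -f store2
bash-3.00# zpool clear store2        # clear the checksum error logged in the status output
bash-3.00# zpool status              # confirm both pools come up ONLINE with no new errors

Running 'zpool scrub store2' afterwards and re-checking 'zpool status' would confirm that the checksum error does not recur on the new host.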
Walter Faleiro
2007-Dec-10 05:11 UTC
[zfs-discuss] Moving ZFS file system to a different system
Hi Robert,

Thanks, it worked like a charm.

--Walter

On Dec 7, 2007 7:33 AM, Robert Milkowski <rmilkowski at task.gda.pl> wrote:

> Just make sure the external storage is not connected to both hosts at
> the same time. Once you connect it to the other host, simply import
> both pools with -f (as the pools weren't cleanly exported, I guess).