Peter Eriksson
2007-Apr-03 16:49 UTC
[zfs-discuss] Best way to migrate filesystems to ZFS?
I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9 server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now... what's the "best" way to move all these files? Should one use Solaris tar, Solaris cpio, ufsdump/ufsrestore, rsync or what?

I currently use Solaris tar like this:

  cd $DIR && tar cpE@f - . | rsh $HOST "cd $NEWDIR && tar xpE@f -"

ufsdump/ufsrestore doesn't restore the ACLs, so that doesn't work; the same goes for rsync. cpio will fail on filenames with strange characters (since Solaris cpio doesn't support the -print0/-0 options of GNU find/cpio). Considering that this is some 4 TB of files to move - what other alternatives are there?

Is there some faster alternative to "rsh"? I wonder if a tool that was designed for interactive communications really is suited for multi-terabyte data transfers :-)

Anyway - how did you convert your large filesystems?

/Eagerly awaiting ZFS ACL support in Samba and rsync...
Richard Elling
2007-Apr-03 17:28 UTC
[zfs-discuss] Best way to migrate filesystems to ZFS?
Peter Eriksson wrote:
> I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9
> server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now...
> What's the "best" way to move all these files? Should one use Solaris tar,
> Solaris cpio, ufsdump/ufsrestore, rsync or what?
>
> I currently use Solaris tar like this:
> cd $DIR && tar cpE@f - . | rsh $HOST "cd $NEWDIR && tar xpE@f -"

seems simple enough :-)

> ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, same with rsync.

ufsrestore obviously won't work on ZFS. If you use ACLs, then your task is harder
because of the differences between ACL implementations. You might want to plan on
an audit of whatever method you ultimately choose.

> cpio will fail with filenames with strange characters (since Solaris cpio doesn't
> support the -print0/-0 options from GNU find/cpio). Considering that this is some
> 4TB of files to move - what other alternatives are there?

pax, gtar (on the companion CD for S9, IIRC), rdist, et al. Surely there must be
more than 100 ways to do this.

> Is there some faster alternative to "rsh" - I wonder if a tool that was designed for
> interactive communications really is suited for multi-terabyte data transfers :-)

You'll likely be wire-speed or disk bound. I would recommend ssh over rsh as a best
practice.

> Anyway - how did you convert your large filesystems?
>
> /Eagerly awaiting ZFS ACL support in Samba and rsync...

Check the zfs-discuss archives; this topic comes up every other month or so.
 -- richard
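For illustration, the original tar pipeline needs only its transport swapped out to follow that last suggestion and run over ssh instead of rsh. A minimal, untested sketch ($DIR, $HOST and $NEWDIR are the same placeholders as in Peter's command, and the flags assume Solaris /usr/bin/tar on both ends):

  # create on the old host, extract on the new one; p preserves modes/ACLs,
  # E writes extended tar headers, @ carries extended attributes across
  cd $DIR && tar cpE@f - . | ssh $HOST "cd $NEWDIR && tar xpE@f -"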
> > I currently use Solaris tar like this:
> > cd $DIR && tar cpE@f - . | rsh $HOST "cd $NEWDIR && tar xpE@f -"
>
> seems simple enough :-)
>
> > ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work,
> > same with rsync.
>
> ufsrestore obviously won't work on ZFS.

Is this obvious? I'm sure it's not working well today, but would it be possible for it to translate the UFS ACLs to ZFS ACLs, or are there fundamental mapping issues that it cannot span by itself?

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
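For background on why a translation step would be needed: UFS carries draft-POSIX ACLs, managed with setfacl/getfacl, while ZFS stores NFSv4-style ACLs, shown with ls -V (Mark's transcript further down shows that side). A rough, approximate illustration of the UFS-side representation (made-up example; output abbreviated and not taken from a real run):

  ufs# setfacl -m user:marks:rwx file.1
  ufs# getfacl file.1
  # file: file.1
  # owner: root
  # group: root
  user::rw-
  user:marks:rwx          #effective:rwx
  group::r--              #effective:r--
  mask:rwx
  other:r--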
Robert Thurlow
2007-Apr-03 17:44 UTC
[zfs-discuss] Best way to migrate filesystems to ZFS?
Richard Elling wrote:
> Peter Eriksson wrote:
>> ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, same
>> with rsync.
>
> ufsrestore obviously won't work on ZFS.

ufsrestore works fine; it only reads from a 'ufsdump' format medium and
writes through generic filesystem APIs. I did some of this last week.
ACLs, as noted, won't get written out to ZFS.

Rob T
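Since ufsrestore only needs a dump stream on standard input, the dump and the restore can also be chained straight over the network. A rough, untested sketch (the device path, hostname and target ZFS filesystem below are made-up placeholders):

  # on the old Solaris 9 host: dump the UFS slice to stdout and let
  # ufsrestore unpack it into the target ZFS filesystem on the new host
  ufsdump 0f - /dev/rdsk/c0t0d0s6 | ssh new-host "cd /tank/data && ufsrestore rf -"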
Mark Shellenbaum
2007-Apr-03 17:54 UTC
[zfs-discuss] Best way to migrate filesystems to ZFS?
Robert Thurlow wrote:
> Richard Elling wrote:
>> Peter Eriksson wrote:
>>> ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work,
>>> same with rsync.
>>
>> ufsrestore obviously won't work on ZFS.
>
> ufsrestore works fine; it only reads from a 'ufsdump' format medium and
> writes through generic filesystem APIs. I did some of this last week.
> ACLs, as noted, won't get written out to ZFS.

Actually, current ufsrestore can restore ACLs from UFS to ZFS:

# newfs /dev/dsk/c1t0d0s0
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
Warning: 4164 sector(s) in last cylinder unallocated
/dev/rdsk/c1t0d0s0:     52938684 sectors in 8617 cylinders of 48 tracks, 128 sectors
        25849.0MB in 539 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
..........
super-block backups for last 10 cylinder groups at:
 52005024, 52103456, 52201888, 52300320, 52398752, 52497184, 52595616,
 52694048, 52792480, 52890912
# mount /dev/dsk/c1t0d0s0 /mnt
# cd /mnt
# touch file.1
# setfacl -m user:marks:rwx file.1
# ufsdump 0vf /var/tmp/dump.out /dev/rdsk/c1t0d0s0
  DUMP: Date of this level 0 dump: Tue Apr 03 11:51:19 2007
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/c1t0d0s0 (rousay.Central.Sun.COM:/mnt) to /var/tmp/dump.out.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 32 Kilobyte records
  DUMP: Estimated 1630 blocks (815KB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: Finished writing last dump volume
  DUMP: Starting verify pass
  DUMP: 1598 blocks (799KB) on 1 volume at 19487 KB/sec
  DUMP: DUMP IS DONE
# ls
file.1        lost+found
# cd /rootpool
# ls
boot
# mkdir test
# ufsrestore rvf /var/tmp/dump.out
Verify volume and initialize maps
Media block size is 126
Dump   date: Tue Apr 03 11:51:19 2007
Dumped from: the epoch
Level 0 dump of /mnt on rousay.Central.Sun.COM:/dev/dsk/c1t0d0s0
Label: none
Begin level 0 restore
Initialize symbol table.
Extract directories from tape
Calculate extraction list.
Make node ./lost+found
Extract new leaves.
Check pointing the restore
extract file ./file.1
Add links
Set directory mode, owner, and times.
Check the symbol table.
Check pointing the restore
# ls -l
total 9
drwxr-xr-x   3 root     root           3 Apr  2 08:21 boot
-rw-r--r--+  1 root     root           0 Apr  3 11:50 file.1
drwx------   2 root     root           2 Apr  3 11:50 lost+found
-rw-------   1 root     root     2516764 Apr  3 11:51 restoresymtable
drwxr-xr-x   2 root     root           2 Apr  3 11:51 test
# ls -V file.1
-rw-r--r--+  1 root     root           0 Apr  3 11:50 file.1
         owner@:rw-p--aA--cC-s:------:allow
         owner@:--x-----------:------:deny
     user:marks:-wxp---A---C--:------:deny
     user:marks:rwxp--a---c--s:------:allow
     user:marks:-------A---C--:------:deny
         group@:-wxp---A---C--:------:deny
         group@:r-----a---c--s:------:allow
         group@:-wxp---A---C--:------:deny
      everyone@:r-----a---c--s:------:allow
      everyone@:-wxp---A---C--:------:deny
Pål Baltzersen
2007-May-15 16:10 UTC
[zfs-discuss] Re: Best way to migrate filesystems to ZFS?
I would use rsync; over NFS if possible, otherwise over ssh. (NFS performs significantly better on read than write, so preferably share from the old server and mount on the new one.)

old# share -F nfs -o ro=@new,root=@new /my/data
(or edit /etc/dfs/dfstab and run shareall)

new# mount -r old:/my/data /mnt
new# ls -l /my/data/
ls: /my/data/: No such file or directory
new# rsync -aRHDn --delete /mnt/ /my/data/
new# rsync -aRHD --delete /mnt/ /my/data/
new# umount /mnt

Caution! The --delete option tells rsync to delete files under /my/data/ that don't exist on /mnt/, so dry-run with -n first to check!

If you get interrupted or need an incremental update, simply resume/catch up by repeating:

new# rsync -aRHD --delete /mnt/ /my/data/

rsync will continue where it was interrupted (discarding the last incomplete file, though) and remove deleted files (the --delete option).

For verbosity you may want to add -v and --progress:

new# rsync -aRHDv --delete --progress /mnt/ /my/data/

If you can't share via NFS (for security reasons, say):

new# rsync -aRHDv --delete --progress old:/my/data/ /
or
old# rsync -aRHDv --delete --progress /my/data/ new:/

This is even possible via a gatekeeper:

new# rsync -aRHDv --delete --progress -e 'ssh gate ssh' old:/my/data/ /
old# rsync -aRHDv --delete --progress -e 'ssh gate ssh' /my/data/ new:/

Pål
Pål Baltzersen
2007-May-15 16:27 UTC
[zfs-discuss] Re: Best way to migrate filesystems to ZFS?
Sorry I realize I was a bit misleading in the path handling and need to correct this part:

new# mount -r old:/my/data /mnt
new# mkdir -p /my/data
new# cd /mnt ; rsync -aRHDn --delete ./ /my/data/
new# cd /mnt ; rsync -aRHD --delete ./ /my/data/
new# umount /mnt
..
new# cd /mnt ; rsync -aRHD --delete ./ /my/data/
..
new# cd /mnt ; rsync -aRHDv --delete --progress ./ /my/data/
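Whichever method ends up being used, Richard's suggestion of auditing the result is worth following up. A rough, untested sketch of a minimal post-copy check, run on the new host while the old filesystem is still mounted read-only on /mnt (it compares file inventories and spot-checks contents, not ACLs; 'some/file' stands in for any real path):

  new# ( cd /mnt && find . -type f -print | sort ) > /tmp/old.list
  new# ( cd /my/data && find . -type f -print | sort ) > /tmp/new.list
  new# diff /tmp/old.list /tmp/new.list
  new# # spot-check a few files by checksum (digest(1) ships with Solaris 10)
  new# digest -a md5 /mnt/some/file /my/data/some/file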