Hi all,

I am attempting to move my MDT to a new server and I'm seeing strange
behavior when trying to restore the MDT tar file taken from the original
disk to the new one.  Basically, the existing MDT data appears to use
about 2.5G; however, when I restore the tar file on the new server it
completely fills a 120G partition and the restoration fails with
out-of-disk-space errors.  I'm a little mystified as to what is
happening here, since the disk format is more or less the same.  What
follows is the disk information for the original disk, followed by the
new one.  Any insights would be greatly appreciated...

=============================< Original MDT >================================

[root@mds1 data-MDT0000]# df -i
Filesystem        Inodes    IUsed     IFree IUse% Mounted on
/dev/md0        78151680  3368714  74782966    5% /mnt/data

[root@mds1 data-MDT0000]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        112G  2.5G  102G   3% /mnt/data

[root@mds1 data-MDT0000]# tune2fs -l /dev/md0
tune2fs 1.40.11.sun1 (17-June-2008)
device /dev/md0 mounted by lustre per /proc/fs/lustre/mds/data-MDT0000/mntdev
Filesystem volume name:   data-MDT0000
Last mounted on:          <not available>
Filesystem UUID:          57449444-0de9-42e3-a919-70d216bc01f5
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              78151680
Block count:              39070048
Reserved block count:     1953502
Free blocks:              28643263
Free inodes:              74782966
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1024
Blocks per group:         16384
Fragments per group:      16384
Inodes per group:         32768
Inode blocks per group:   4096
Filesystem created:       Tue May 29 13:49:47 2007
Last mount time:          Thu May 21 13:18:25 2009
Last write time:          Thu May 21 13:18:25 2009
Mount count:              180
Maximum mount count:      22
Last checked:             Tue May 29 13:49:47 2007
Check interval:           15552000 (6 months)
Next check after:         Sun Nov 25 12:49:47 2007
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               512
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      cf133553-2ae6-42a1-b0b5-cfbad6fd104b
Journal backup:           inode blocks

=============================< New MDT >================================

[root@mds2 ~]# df -i
Filesystem        Inodes   IUsed     IFree IUse% Mounted on
/dev/md2        35848192  384427  35463765    2% /root/data

[root@mds2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        120G  120G     0 100% /root/data

[root@mds2 ~]# tune2fs -l /dev/md2
tune2fs 1.40.11.sun1 (17-June-2008)
Filesystem volume name:   data-MDT0000
Last mounted on:          <not available>
Filesystem UUID:          d197b003-5ce5-4d1b-8253-b196b2009d07
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file uninit_groups
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              35848192
Block count:              35842992
Reserved block count:     1792149
Free blocks:              125970
Free inodes:              35463765
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1015
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   4096
Filesystem created:       Wed May 20 16:45:54 2009
Last mount time:          Thu May 21 13:27:52 2009
Last write time:          Thu May 21 13:27:52 2009
Mount count:              14
Maximum mount count:      26
Last checked:             Wed May 20 16:45:54 2009
Check interval:           15552000 (6 months)
Next check after:         Mon Nov 16 15:45:54 2009
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               512
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      819ff3d7-b341-4300-8cd8-b6a6b3020c4e
Journal backup:           inode blocks

_________________________________________
Ron Jerome
Programmer/Analyst
National Research Council Canada
M-2, 1200 Montreal Road, Ottawa, Ontario K1A 0R6
Government of Canada
Phone: 613-993-5346
FAX: 613-941-1571
_________________________________________
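[Editor's note: the symptom above — 2.5G in use yet a restore that overflows 120G — is what sparse files look like through a naive copy. A quick way to see the effect is to compare a file's apparent size against the blocks actually allocated; the sketch below uses a throwaway file rather than a real MDT, and the file name is purely illustrative.]

```shell
# Apparent size vs. allocated blocks: a large gap means the file is sparse.
truncate -s 100M holey.img   # create a 100M file that is one big hole
ls -lh holey.img             # apparent size: 100M
du -h holey.img              # allocated size: essentially zero
stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' holey.img
```

A plain copy or archive that reads such a file sequentially sees 100M of zeros and stores them all, which is how a small amount of real data can balloon on restore.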
On Thu, 2009-05-21 at 13:39 -0400, Jerome, Ron wrote:
> Hi all,

Hi Ron,

> I am attempting to move my MDT to a new server and I'm seeing strange
> behavior when trying to restore the MDT tar file taken from the original
> disk to the new one.  Basically, the existing MDT data appears to use
> about 2.5G; however, when I restore the tar file on the new server it
> completely fills a 120G partition and the restoration fails with
> out-of-disk-space errors.

There has been a lot of discussion on this list about backing up the MDT
for relocation.  Please review the archives.  IIRC, there was even
somebody reporting this exact issue.

b.
Hmmm, a little research in the archives leads me to believe that the
--sparse option is required on the tar "create" command line.

Would this be correct?

BTW, this MDT is running 1.6.7.1.

Thanks,

Ron

> -----Original Message-----
> From: lustre-discuss-bounces@lists.lustre.org
> [mailto:lustre-discuss-bounces@lists.lustre.org] On Behalf Of Brian J. Murrell
> Sent: May 21, 2009 2:15 PM
> To: lustre-discuss@lists.lustre.org
> Subject: Re: [Lustre-discuss] MDT backup/restore
>
> On Thu, 2009-05-21 at 13:39 -0400, Jerome, Ron wrote:
> > Hi all,
>
> Hi Ron,
>
> > I am attempting to move my MDT to a new server and I'm seeing strange
> > behavior when trying to restore the MDT tar file taken from the
> > original disk to the new one.  Basically, the existing MDT data
> > appears to use about 2.5G; however, when I restore the tar file on
> > the new server it completely fills a 120G partition and the
> > restoration fails with out-of-disk-space errors.
>
> There has been a lot of discussion on this list about backing up the
> MDT for relocation.  Please review the archives.  IIRC, there was even
> somebody reporting this exact issue.
>
> b.
Ok, replying to myself for the benefit of others who stumble upon
this...

Using the -S (or --sparse) argument on the tar command when archiving
MDT/MDS file systems solves the issue of the restored files being larger
than the original and thus not fitting on the target file system.

I would suggest that adding this to the documentation might be
beneficial to all who attempt to move an MDS file system :-)

Thanks to the Lustre team for all their hard work,

Ron.

> -----Original Message-----
> From: lustre-discuss-bounces@lists.lustre.org
> [mailto:lustre-discuss-bounces@lists.lustre.org] On Behalf Of Jerome, Ron
> Sent: May 21, 2009 3:39 PM
> To: lustre-discuss@lists.lustre.org
> Cc: Brian J. Murrell
> Subject: Re: [Lustre-discuss] MDT backup/restore
>
> Hmmm, a little research in the archives leads me to believe that the
> --sparse option is required on the tar "create" command line.
>
> Would this be correct?
>
> BTW, this MDT is running 1.6.7.1.
>
> Thanks,
>
> Ron
>
> > -----Original Message-----
> > From: lustre-discuss-bounces@lists.lustre.org
> > [mailto:lustre-discuss-bounces@lists.lustre.org] On Behalf Of Brian J. Murrell
> > Sent: May 21, 2009 2:15 PM
> > To: lustre-discuss@lists.lustre.org
> > Subject: Re: [Lustre-discuss] MDT backup/restore
> >
> > On Thu, 2009-05-21 at 13:39 -0400, Jerome, Ron wrote:
> > > Hi all,
> >
> > Hi Ron,
> >
> > > I am attempting to move my MDT to a new server and I'm seeing
> > > strange behavior when trying to restore the MDT tar file taken from
> > > the original disk to the new one.  Basically, the existing MDT data
> > > appears to use about 2.5G; however, when I restore the tar file on
> > > the new server it completely fills a 120G partition and the
> > > restoration fails with out-of-disk-space errors.
> >
> > There has been a lot of discussion on this list about backing up the
> > MDT for relocation.  Please review the archives.  IIRC, there was
> > even somebody reporting this exact issue.
> >
> > b.
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
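[Editor's note: the fix described above can be reproduced in miniature with GNU tar on a throwaway sparse file — a sketch only, with illustrative file names, not a real MDT procedure.]

```shell
# Demonstrate why -S/--sparse matters when archiving sparse files.
truncate -s 100M holey.img       # 100M apparent size, almost no blocks on disk

tar -cf plain.tar holey.img      # without -S: the holes are stored as literal zeros
tar -S -cf sparse.tar holey.img  # with -S: only real data (plus hole maps) is stored

ls -l plain.tar sparse.tar       # plain.tar is over 100M; sparse.tar is a few KB
```

On extraction, GNU tar recreates the holes recorded by -S, so the restored files occupy roughly the same blocks as the originals; without -S, every hole is materialized as real zero-filled blocks, which is exactly how a 2.5G MDT can overflow a 120G partition.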