David Wolfskill
2009-Jan-02 15:49 UTC
newfs(8) parameters from "dumpfs -m" have bad -s value?
I have a requirement to be able to re-create a largish file system on
occasion.  The file system was created with some non-default newfs(8)
parameters.  Once I found out about it, it seemed that the output of
"dumpfs -m" would be ideal to use in the script that performs the
analysis & (if appropriate) re-creation.

But I found that that output (in this case, at least) includes a rather
bogus "-s" parameter.  A circumvention is to interpose a sed(1)
invocation to elide the parameter, but that seems a tad ... ugly
(though admittedly effective).

I'm running 7.1-RC1 on the systems in question.  Here's the supporting
evidence.

We start with one of the file systems in question as it is supposed to be:

pool10(7.1-RC1)[32] df -ki /dev/da1s1d
Filesystem  1024-blocks Used      Avail Capacity iused     ifree %iused  Mounted on
/dev/da1s1d  1702753030    4 1566532784     0%       2 220046332    0%   /b
pool10(7.1-RC1)[33]

Here's what dumpfs(8) says:

pool10(7.1-RC1)[36] dumpfs -m /dev/da1s1d
# newfs command for /dev/da1s1d (/dev/da1s1d)
newfs -O 2 -U -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 879031908 /dev/da1s1d
pool10(7.1-RC1)[37]

I then unmount the file system & re-create it naively:

pool10(7.1-RC1)[37] umount /dev/da1s1d
pool10(7.1-RC1)[38] dumpfs -m /dev/da1s1d | sh
/dev/da1s1d: 429214.8MB (879031908 sectors) block size 16384, fragment size 2048
        using 2336 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
        with soft updates
super-block backups (for fsck -b #) at:
 160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, 3010976,
 ...
 876523968, 876900320, 877276672, 877653024, 878029376, 878405728, 878782080
pool10(7.1-RC1)[39] df -ki /dev/da1s1d
Filesystem  1024-blocks Used     Avail Capacity iused    ifree %iused  Mounted on
/dev/da1s1d   425686716    4 391631776     0%       2 55017468    0%
pool10(7.1-RC1)[40]

The file system had been 1702753030 KB; it is now 425686716 KB -- 25% of
its intended size.

By eliding the -s parameter, we get:

pool10(7.1-RC1)[40] dumpfs -m /b | sed -Ee 's/ -s [0-9]+ / /' | sh
/dev/da1s1d: 1716859.2MB (3516127632 sectors) block size 16384, fragment size 2048
        using 9343 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
        with soft updates
super-block backups (for fsck -b #) at:
 160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, 3010976,
 ...
 3513622432, 3513998784, 3514375136, 3514751488, 3515127840, 3515504192, 3515880544
pool10(7.1-RC1)[41] df -ki /dev/da1s1d
Filesystem  1024-blocks Used      Avail Capacity iused     ifree %iused  Mounted on
/dev/da1s1d  1702753030    4 1566532784     0%       2 220046332    0%
pool10(7.1-RC1)[42]

[Sorry about the long lines....]

Is dumpfs(8) actually behaving as expected (or correctly) in this case?

Peace,
david
-- 
David H. Wolfskill				david@catwhisker.org
Depriving a girl or boy of an opportunity for education is evil.

See http://www.catwhisker.org/~david/publickey.gpg for my public key.
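For reference, here's roughly what the re-creation step in that script looks
like with the sed(1) circumvention interposed.  This is only a sketch; the
analysis logic that decides whether re-creation is appropriate is omitted,
and it assumes /b has an entry in /etc/fstab:

  #!/bin/sh
  # Sketch: re-create /dev/da1s1d with the parameters "dumpfs -m" reports,
  # but strip the bogus -s value so newfs(8) sizes the file system from
  # the device itself.
  fs=/dev/da1s1d
  mnt=/b

  umount "$mnt" || exit 1

  # dumpfs -m emits a complete newfs command line; elide " -s <n> " before
  # handing it to the shell.
  dumpfs -m "$fs" | sed -Ee 's/ -s [0-9]+ / /' | sh || exit 1

  mount "$mnt"    # relies on the /etc/fstab entry for /b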
Oliver Fromme
2009-Jan-05 19:24 UTC
newfs(8) parameters from "dumpfs -m" have bad -s value?
David Wolfskill wrote:
> pool10(7.1-RC1)[32] df -ki /dev/da1s1d
> Filesystem  1024-blocks Used      Avail Capacity iused     ifree %iused  Mounted on
> /dev/da1s1d  1702753030    4 1566532784     0%       2 220046332    0%   /b
> 
> Here's what dumpfs(8) says:
> 
> pool10(7.1-RC1)[36] dumpfs -m /dev/da1s1d
> # newfs command for /dev/da1s1d (/dev/da1s1d)
> newfs -O 2 -U -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 879031908 /dev/da1s1d

This seems to be a bug in dumpfs(8).  It simply prints the value of the
fs_size field of the superblock, which is wrong.  The -s option of
newfs(8) expects the size in sectors (512 bytes each), but the fs_size
field contains the size of the file system in 2 KB units.  That matches
the fragment size here, but I'm not sure whether that is just
coincidence (the docs state that it's the size in blocks, which is
misleading because the block size is usually different; the default is
16 KB).

So dumpfs(8) needs to be fixed to perform the proper calculation when
printing the value for the -s option.  Unfortunately I'm not enough of
a UFS guru to offer a fix.  My best guess would be to multiply the
fs_size value by the fragment size measured in 512-byte units, i.e.
multiply by 4 in the most common case.  But I'm afraid the real
solution is not that simple.

Best regards
   Oliver

-- 
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606,  Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758,  Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart

FreeBSD-Dienstleistungen, -Produkte und mehr:  http://www.secnetix.de/bsd

(On the statement print "42 monkeys" + "1 snake":)  By the way,
both perl and Python get this wrong.  Perl gives 43 and Python
gives "42 monkeys1 snake", when the answer is clearly
"41 monkeys and 1 fat snake".
        -- Jim Fulton
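For what it's worth, the numbers in the transcripts above line up with that
guess.  Multiplying the fs_size value that dumpfs printed (879031908) by the
fragment size expressed in 512-byte sectors (2048 / 512 = 4) reproduces the
sector count newfs reported for the correctly sized file system.  A quick
check (values copied from the output above; bc(1) is used only to avoid any
32-bit overflow in shell arithmetic):

  # fs_size from "dumpfs -m" * fragment size in bytes / bytes per sector
  $ echo '879031908 * 2048 / 512' | bc
  3516127632

That is exactly the "3516127632 sectors" newfs printed once the -s parameter
was elided, so the multiply-by-four guess at least fits this particular
file system.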