Hi all,

When testing our programs, I ran into a problem. On UFS we get the number of free inodes via 'df -e' and then act on that value; for example, after creating one empty file the value decreases by 1. But on ZFS this does not work: I can still get a number via 'df -e', but after creating the same empty file the value is not what I expect. So I used a loop to produce empty files and watched the output of 'df -e'. After a while the number was 671, then 639, 641, 603, 605, 609, 397, 607...

I checked the number of files, and yes, it increases steadily.

Could you explain this?

Thanks,
-- 
John Cui x82195
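The reproduction loop John describes can be sketched as follows. This is a minimal sketch, not his actual test program: DIR is a hypothetical scratch directory, the loop is capped at 100 files so the sketch terminates, and on non-Solaris systems 'df -i' stands in for 'df -e'.

```shell
#!/bin/sh
# Sketch of the reproduction: create empty files in a loop, then look at
# the free-inode count.  DIR is a hypothetical scratch directory; the
# count is capped at 100 here so the sketch finishes quickly.
DIR=${DIR:-/tmp/inode-test}
mkdir -p "$DIR"
i=0
while [ "$i" -lt 100 ]; do
    : > "$DIR/file.$i"            # create one empty file
    i=$((i + 1))
done
# 'df -e' is the Solaris form; fall back to 'df -Pi' elsewhere.
df -e "$DIR" 2>/dev/null || df -Pi "$DIR"
echo "files created: $(ls "$DIR" | wc -l)"
```

On UFS the reported free-inode count drops by exactly one per file; on ZFS it jumps around as described below.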
Hello John,

Thursday, November 9, 2006, 12:03:58 PM, you wrote:

JC> Hi all,
JC> When testing our programs, I ran into a problem. On UFS we get the
JC> number of free inodes via 'df -e' and then act on that value; for
JC> example, after creating one empty file the value decreases by 1.
JC> But on ZFS this does not work: I can still get a number via
JC> 'df -e', but after creating the same empty file the value is not
JC> what I expect. So I used a loop to produce empty files and watched
JC> the output of 'df -e'. After a while the number was 671, then 639,
JC> 641, 603, 605, 609, 397, 607...
JC> I checked the number of files, and yes, it increases steadily.

JC> Could you explain this?

UFS has a static number of inodes in a given file system, so it's easy to say how many free inodes are left.

ZFS creates inodes on demand, so you can't say how many inodes you can still create; however, I guess one could calculate the maximum possible number of inodes that could be created given the free space in a pool/fs.

-- 
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
Robert Milkowski wrote:
> UFS has a static number of inodes in a given file system, so it's
> easy to say how many free inodes are left.
>
> ZFS creates inodes on demand, so you can't say how many inodes you
> can still create; however, I guess one could calculate the maximum
> possible number of inodes that could be created given the free space
> in a pool/fs.

Yup, this is what ZFS does. It makes a *very rough* estimate of how many empty files could be created given the amount of available space. This number may be useful as some sort of upper bound, but no more than that.

-Mark
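A back-of-the-envelope version of that estimate can be sketched as below. The 512-byte per-file cost is an assumption chosen for illustration, not the figure ZFS actually uses internally:

```shell
#!/bin/sh
# Rough upper bound on how many empty files might fit, in the spirit of
# the ZFS estimate: free space divided by an assumed per-file metadata
# cost.  BYTES_PER_FILE=512 is an illustrative assumption only.
TARGET=${TARGET:-/}
BYTES_PER_FILE=512
# POSIX 'df -Pk': available space in 1K blocks is column 4 of line 2.
avail_kb=$(df -Pk "$TARGET" | awk 'NR==2 {print $4}')
echo "rough upper bound: $(( avail_kb * 1024 / BYTES_PER_FILE )) files"
```

Any such figure shrinks as other writers consume the same free space, which is why the number John observed moved around rather than decreasing by one per file.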
A UFS file system has a fixed number of inodes, set when the file system is created. df can simply report how many of those have been used, and how many are free.

Most file systems, including ZFS and QFS, allocate inodes dynamically. In this case, there really isn't a "number of files free" that df can report. The best that we can do is to "guess" how many files you might be able to create, but this will differ depending on file size, and may change depending on other I/O activity, since the available free space is used not only for inodes, but for other data or metadata on the file system.

What are you trying to do based on this number? If you are simply trying to determine whether it should be possible to create a given number of files, for instance to preflight a copy operation, you could use the df/stat() result as a hint, though it may not be accurate.

This message posted from opensolaris.org
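A preflight check of the kind described above might be sketched like this. It treats the reported free-inode count strictly as a hint; 'df -e' is the Solaris form, the GNU/Linux 'df -Pi' fallback and the column positions are assumptions about each tool's output format:

```shell
#!/bin/sh
# Preflight hint: warn if the reported free-inode count looks too low
# to create NFILES files.  The count is only a hint, as noted above.
TARGET=${TARGET:-/}
NFILES=${NFILES:-1000}
# Solaris: 'df -e' prints "filename  ifree"; ifree is column 2 of line 2.
free=$(df -e "$TARGET" 2>/dev/null | awk 'NR==2 {print $2}')
# GNU/Linux fallback: 'df -Pi' puts IFree in column 4 of line 2.
[ -z "$free" ] && free=$(df -Pi "$TARGET" | awk 'NR==2 {print $4}')
if [ "$free" != "-" ] && [ "$free" -lt "$NFILES" ]; then
    echo "hint: only $free free inodes reported; creating $NFILES files may fail"
else
    echo "hint: $free free inodes reported for $NFILES files"
fi
```

Even when the check passes, the copy can still fail, so the real test remains attempting the creation and handling the error.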
Thanks to Anton, Robert and Mark for the replies. Your answers confirmed my observation, ;-).

The reason I want to use up the inodes is that we need to test our programs' behavior when both blocks and inodes are exhausted. If I only fill up the blocks, creating an empty file still succeeds.

Thanks,

Anton B. Rang wrote:
> A UFS file system has a fixed number of inodes, set when the file
> system is created. df can simply report how many of those have been
> used, and how many are free.
>
> Most file systems, including ZFS and QFS, allocate inodes
> dynamically. In this case, there really isn't a "number of files
> free" that df can report. The best that we can do is to "guess" how
> many files you might be able to create, but this will differ
> depending on file size, and may change depending on other I/O
> activity, since the available free space is used not only for inodes,
> but for other data or metadata on the file system.
>
> What are you trying to do based on this number? If you are simply
> trying to determine whether it should be possible to create a given
> number of files, for instance to preflight a copy operation, you
> could use the df/stat() result as a hint, though it may not be
> accurate.

-- 
John Cui x82195
On 10 November, 2006 - John Cui sent me these 1,6K bytes:

> Thanks to Anton, Robert and Mark for the replies. Your answers
> confirmed my observation, ;-).
>
> The reason I want to use up the inodes is that we need to test our
> programs' behavior when both blocks and inodes are exhausted. If I
> only fill up the blocks, creating an empty file still succeeds.

In ZFS, both inodes and data blocks take up space. When you run out of space, you can't create anything that takes up space anymore. Nothing magic; the disk gets full when it gets full.

/Tomas
-- 
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
> The reason I want to use up the inodes is that we need to test our
> programs' behavior when both blocks and inodes are exhausted. If I
> only fill up the blocks, creating an empty file still succeeds.

Pretty much the only way to tell if you've used up all the space available for file nodes is to actually try creating a file, though if 'df -e' returns 0 you *may* not be able to create any new files. It may be possible to create empty files (and very small ones) even if there are no free blocks (empty files don't require data blocks, and small files can be packed into the inode).

To generate a full file system most efficiently, presuming you don't actually need a particular mix of files, I'd suggest writing a single large file in large blocks (1 megabyte or more) until you get ENOSPC; then writing a smaller file in small blocks (512 bytes) until ENOSPC; then trying a smaller file once more; and then creating empty files until you run out of inodes. This should work pretty well on UFS, ZFS, and QFS, even though the three have quite different storage structures.
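That recipe can be sketched as a script along the following lines. This is a hedged sketch rather than production code: FS names a hypothetical scratch mount point, the function only runs if FS exists, and whatever it points at will be filled to ENOSPC, so do not aim it at a file system you care about.

```shell
#!/bin/sh
# Sketch of the fill strategy: large writes, then small writes, then one
# final small attempt, then empty files until creation itself fails.
# FS is a hypothetical scratch mount point -- it WILL be filled up.
fill_fs() {
    fs=$1
    # 1. One large file, 1 MB blocks, until ENOSPC.
    dd if=/dev/zero of="$fs/big" bs=1048576 2>/dev/null
    # 2. One smaller file, 512-byte blocks, until ENOSPC.
    dd if=/dev/zero of="$fs/small" bs=512 2>/dev/null
    # 3. Try once more in case a stray block came free meanwhile.
    dd if=/dev/zero of="$fs/tiny" bs=512 count=1 2>/dev/null
    # 4. Empty files until file creation fails (inodes/metadata gone).
    i=0
    while : > "$fs/empty.$i" 2>/dev/null; do
        i=$((i + 1))
    done
    echo "created $i empty files before failure"
}

FS=${FS:-/mnt/scratch}        # hypothetical scratch mount point
[ -d "$FS" ] && fill_fs "$FS"
```

On UFS step 4 stops at the fixed inode limit; on ZFS it stops when the remaining free space can no longer hold new metadata.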
Anton B. Rang wrote:
> Pretty much the only way to tell if you've used up all the space
> available for file nodes is to actually try creating a file, though
> if 'df -e' returns 0 you *may* not be able to create any new files.
> It may be possible to create empty files (and very small ones) even
> if there are no free blocks (empty files don't require data blocks,
> and small files can be packed into the inode).
>
> To generate a full file system most efficiently, presuming you don't
> actually need a particular mix of files, I'd suggest writing a single
> large file in large blocks (1 megabyte or more) until you get ENOSPC;
> then writing a smaller file in small blocks (512 bytes) until ENOSPC;
> then trying a smaller file once more; and then creating empty files
> until you run out of inodes. This should work pretty well on UFS,
> ZFS, and QFS, even though the three have quite different storage
> structures.

Yes, that is what I am doing. I have found that when creating empty files, the length of the filename sometimes makes a difference.

Thanks,
-- 
John Cui x82195