I set up a ZFS system on a Linux x86 box.

> zpool history
History for 'raidpool':
2009-01-15.17:12:48 zpool create -f raidpool raidz1 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
2009-01-15.17:15:54 zfs create -o mountpoint=/vol01 -o sharenfs=on -o canmount=on raidpool/vol01

I did not make the export (vol01) into a volume. I know you can set a default blocksize when you create a volume, but volumes cannot be shared as NFS exports. So I did not make the NFS exports into volumes, and I did not specify a blocksize on them.

I am assuming that vol01 is using variable blocksizes because I did not explicitly specify one. My assumption is that ZFS picks the smallest power-of-2 blocksize that fits, where the smallest blocksize is 512 bytes and the largest is 128K.

I use the stat command to check the file size, the blocksize, and the number of blocks. I created a file that is exactly 512 bytes in size on /vol01 and ran the following:

stat --printf "%n %b %B %s %o\n" *

%b is the number of blocks used and %B is the blocksize.

The number of blocks changes a few minutes after the file is created:

# stat --printf "%n %b %B %s %o\n" *
file.512 1 512 512 4096
# stat --printf "%n %b %B %s %o\n" *
file.512 1 512 512 4096
# stat --printf "%n %b %B %s %o\n" *
file.512 1 512 512 4096

Q1) Why does the number of blocks change a few minutes after the file is created? And why are we using 3 blocks when the file is only 512 bytes in size (in other words, only 1 block is needed)? This makes it seem that the minimum blocksize isn't 512 bytes but 1536 bytes.

Q2) Is there a way to force ZFS to use a 512-byte blocksize? That is, if a file is 512 bytes in size or smaller, it should only use 512 bytes -- the number of blocks it uses should be 1. I don't know how to do that using the zpool or zfs options.

-- 
This message posted from opensolaris.org
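(A minimal sketch of the repro steps above, assuming GNU coreutils stat as in the post and the same mountpoint; the 35-second sleep is an assumption to let ZFS commit a transaction group, not a documented interval:)

dd if=/dev/urandom of=/vol01/file.512 bs=512 count=1   # write exactly 512 bytes
stat --printf "%n %b %B %s %o\n" /vol01/file.512       # st_blocks right after creation
sleep 35                                               # give the txg time to commit
stat --printf "%n %b %B %s %o\n" /vol01/file.512       # st_blocks after the commit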
Mattias Pantzare
2009-Feb-03 22:39 UTC
[zfs-discuss] Why does a file on a ZFS change sizes?
On Tue, Feb 3, 2009 at 20:55, SQA <sqa777 at gmail.com> wrote:
> I set up a ZFS system on a Linux x86 box.
>
> > zpool history
> History for 'raidpool':
> 2009-01-15.17:12:48 zpool create -f raidpool raidz1 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
> 2009-01-15.17:15:54 zfs create -o mountpoint=/vol01 -o sharenfs=on -o canmount=on raidpool/vol01
>
> [...]
>
> Q1) Why does the number of blocks change a few minutes after the file is created? And why are we using 3 blocks when the file is only 512 bytes in size (in other words, only 1 block is needed)? This makes it seem that the minimum blocksize isn't 512 bytes but 1536 bytes.

You probably have a cut'n'paste error, as all the block numbers are 1 in your example. My guess is that the number of blocks is updated every 5 seconds.

> Q2) Is there a way to force ZFS to use a 512-byte blocksize? That is, if a file is 512 bytes in size or smaller, it should only use 512 bytes -- the number of blocks it uses should be 1.

It is, or at least it is on my Solaris system. But ZFS has to store metadata in one block. Try creating a 600-byte file and it should use one more 512-byte block.
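(One way to run Mattias's 600-byte test -- a sketch, assuming the same GNU tools and mountpoint as the original post:)

dd if=/dev/zero of=/vol01/file.600 bs=600 count=1      # exactly 600 bytes
sleep 35                                               # wait for the block count to settle
stat --printf "%n %b %B %s %o\n" /vol01/file.600       # expect one extra 512-byte block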
Mattias,

Sorry for the bad cut-and-paste in my previous post. It indeed is 3 blocks, not 1.

I did what you said, but the ZFS system added 2 extra blocks, not 1. The file was 608 bytes in size, which should be 2 blocks; with the metadata info, it should have used 3 blocks, not 4. It's using 4, though:

stat --printf "%n %b %B %s %o\n" file.608
file.608 1 512 608 4096
stat --printf "%n %b %B %s %o\n" file.608
file.608 4 512 608 4096
stat --printf "%n %b %B %s %o\n" file.608
file.608 4 512 608 4096

Any idea why it's adding 2 blocks instead of one?

-- 
This message posted from opensolaris.org
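(To see where those sectors actually land, zdb can dump the file's block pointers -- a sketch, assuming the inode number reported by ls -i is the file's ZFS object number and that zdb is run with enough privilege; output format varies between releases:)

obj=$(ls -i /vol01/file.608 | awk '{print $1}')   # ZFS object number of the file
zdb -ddddd raidpool/vol01 "$obj"                  # dumps lsize/psize/asize and the DVAs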
Anybody have a guess as to the cause of this problem?

-- 
This message posted from opensolaris.org
On 03 February, 2009 - SQA sent me these 0,8K bytes:

> Mattias,
>
> Sorry for the bad cut-and-paste in my previous post. It indeed is 3 blocks, not 1.
>
> I did what you said, but the ZFS system added 2 extra blocks, not 1.
> The file was 608 bytes in size, which should be 2 blocks. With the
> metadata info, it should have used 3 blocks, not 4. It's using 4,
> though.

My guess is that it's because you have a 5-disk raidz1, which has 4 "data disks". Raidz only does whole-stripe writes, so the minimum write covers a whole stripe, which in your case is 4 blocks.

/Tomas
-- 
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
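(If Tomas's whole-stripe guess is right, the arithmetic is: 5 disks minus 1 parity = 4 data columns, and 4 x 512 bytes = 2048 bytes per minimum stripe = 4 of stat's 512-byte blocks, matching the 608-byte file. It also predicts that st_blocks for small files grows in multiples of 4 rather than one sector at a time -- a quick sketch to check, assuming the same tools and mountpoint as above:)

for sz in 512 1024 2048 4096; do                      # a few small file sizes
    dd if=/dev/zero of=/vol01/f.$sz bs=$sz count=1 2>/dev/null
done
sleep 35                                              # let the block counts settle
stat --printf "%n %b %B %s %o\n" /vol01/f.*           # whole-stripe theory => %b in multiples of 4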