Nick Anderson
2012-May-30 18:35 UTC
[Ocfs2-users] Reproducing fragmentation and out of space error
Recently I ran into a situation where an ocfs2 (1.4) volume was reporting it was out of space when it was not. Deleting some files helped in the short term, but the problem quickly comes back.

I believe this is due to the fragmentation bug I have seen references to in the mailing list archive. I am trying to reproduce the problem on a test system so that I can validate that upgrading to 1.6 will resolve the issue.

Curious if anyone has a script that will create a heavily fragmented fs. Also, it's not clear to me which specific debugfs.ocfs2 output I should be looking at to see when I am approaching the point where the error arises.
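For context, one way to watch both counters at once while trying to reproduce this, assuming the volume is mounted at /mnt/ocfs2 (a placeholder path, not from the original post):

#!/bin/bash
# Sketch: poll block and inode usage side by side, so that when an
# "out of space" error appears it is clear which counter ran out.
# /mnt/ocfs2 is a placeholder mount point.
while true; do
    date
    df -hP /mnt/ocfs2   # block usage
    df -iP /mnt/ocfs2   # inode usage -- on ocfs2 1.4 this can hit its
                        # ceiling while plenty of blocks remain free
    sleep 60
done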
Hakan Koseoglu
2012-May-30 18:42 UTC
[Ocfs2-users] Reproducing fragmentation and out of space error
On 30 May 2012 19:35, Nick Anderson <nick at cmdln.org> wrote:
> Recently I ran into a situation where an ocfs2 (1.4) volume was
> reporting it was out of space when it was not. Deleting some files
> helped in the short term, but the problem quickly comes back.
>
> I believe this is due to the fragmentation bug I have seen references
> to in the mailing list archive. I am trying to reproduce the problem
> on a test system so that I can validate that upgrading to 1.6 will
> resolve the issue.
>
> Curious if anyone has a script that will create a heavily fragmented fs.

Originally we had this problem way back in 2009. The following could replicate it pretty quickly:

i=1
until false; do
    for k in `seq 1 100`; do
        mkdir -p testdir-$1-$i/testdir-$1-$i-$k
        for j in `seq 1 1000`; do
            echo test123 > testdir-$1-$i/testdir-$1-$i-$k/testfile-$j
        done
    done
    let i=i+1
done

Did the trick last time for me. The disk will be reported as full before the actual space runs out.
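Note the $1 in the path names: the snippet is presumably meant to be saved as a script and run with an argument, so several copies can fragment the same volume in parallel without colliding. A usage sketch (frag.sh is a hypothetical file name, not from the original post):

# Run two instances against the same mount; the argument becomes the
# $1 in the directory names, keeping the two trees separate.
sh frag.sh a &
sh frag.sh b &
wait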
BIROL AKBAY
2012-May-30 18:46 UTC
[Ocfs2-users] Reproducing fragmentation and out of space error
AFAIK, ocfs2 1.4 has an inode limitation. That's why we moved to 1.6. Could you check free inodes?

Sent from my iPhone

On 30 May 2012, at 21:36, "Nick Anderson" <nick at cmdln.org> wrote:
> Recently I ran into a situation where an ocfs2 (1.4) volume was
> reporting it was out of space when it was not. Deleting some files
> helped in the short term, but the problem quickly comes back.
>
> I believe this is due to the fragmentation bug I have seen references
> to in the mailing list archive. I am trying to reproduce the problem
> on a test system so that I can validate that upgrading to 1.6 will
> resolve the issue.
>
> Curious if anyone has a script that will create a heavily fragmented fs.
>
> Also, it's not clear to me which specific debugfs.ocfs2 output I should
> be looking at to see when I am approaching the point where the error
> arises.
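A quick sketch of checking the free inodes; the commands are standard, but the device and mount point here are assumptions (placeholders, not from the thread):

# /mnt/ocfs2 and /dev/sdb1 are placeholders for the actual mount/device.
df -iP /mnt/ocfs2                   # used/free inode counts as mounted
debugfs.ocfs2 -R "stats" /dev/sdb1  # read-only dump of superblock details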
Nick Anderson
2012-May-30 18:55 UTC
[Ocfs2-users] Reproducing fragmentation and out of space error
On 05/30/2012 01:42 PM, Hakan Koseoglu wrote:
> Originally we had this problem way back in 2009. The following could
> replicate it pretty quickly:
>
> i=1
> until false; do
>     for k in `seq 1 100`; do
>         mkdir -p testdir-$1-$i/testdir-$1-$i-$k
>         for j in `seq 1 1000`; do
>             echo test123 > testdir-$1-$i/testdir-$1-$i-$k/testfile-$j
>         done
>     done
>     let i=i+1
> done

Thanks, trying it now. Just looking at it, I suspect I will run out of inodes and get an out-of-space error.
Nick Anderson
2012-May-31 13:25 UTC
[Ocfs2-users] Reproducing fragmentation and out of space error
On 05/30/2012 01:55 PM, Nick Anderson wrote:
> Thanks, trying it now. Just looking at it, I suspect I will run out of
> inodes and get an out-of-space error.

Hakan, your script just ran me out of inodes. Maybe you already had a fragmented file system to start from, and that triggered it faster.

Is there some specific output from debugfs.ocfs2 that will tell me when I am approaching the situation?

Right now I have a script running that does the following: create 1-3 small files, sized between 1k and 7k; copy each small file, prepend some data to it, then move it back on top of the original; create a large file (starting at 20M and increasing by 10k each time); then loop back to the small files. When the file system fills up, I delete the oldest 20 large files and continue.

That seems like it should cause some fragmentation, but so far I have still been unable to reproduce the out-of-space error while I have free space and free inodes.
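A rough sketch of the loop described above, assuming the volume is mounted at /mnt/ocfs2 (a placeholder; the path, file names, and exact thresholds below are mine, not the original script):

#!/bin/bash
# Workload sketch: small files that get prepended-and-replaced,
# interleaved with one steadily growing large file per pass; when
# the fs fills, drop the oldest 20 large files and keep going.
MOUNT=/mnt/ocfs2
large=20480   # large file starts at 20M (in 1k blocks), grows 10k per pass
n=0
while true; do
    # create 1-3 small files of 1k-7k, then prepend data to each one
    for s in `seq 1 $((RANDOM % 3 + 1))`; do
        f=$MOUNT/small-$n-$s
        dd if=/dev/urandom of=$f bs=1k count=$((RANDOM % 7 + 1)) 2>/dev/null
        # copy, prepend some data, move back on top of the original
        { dd if=/dev/urandom bs=1k count=1 2>/dev/null; cat $f; } > $f.tmp
        mv $f.tmp $f
    done
    # one large file, 10k bigger each iteration
    dd if=/dev/zero of=$MOUNT/large-$n bs=1k count=$large 2>/dev/null
    large=$((large + 10))
    n=$((n + 1))
    # when the fs fills up, delete the oldest 20 large files and continue
    avail=`df -P $MOUNT | awk 'NR==2 {print $4}'`
    if [ "$avail" -lt $((large + 1024)) ]; then
        ls -1tr $MOUNT/large-* 2>/dev/null | head -20 | xargs -r rm -f
    fi
done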