I have a zpool that, once it hit 96% full, degraded horribly in performance. So, to free things up, I'm trying to clear out some space. The problem is that after I delete a directory it no longer shows up at the filesystem level (ls), but the free space isn't reclaimed. After a reboot, the directory is back.

user at server:/tank# df -h /tank
Filesystem            Size  Used Avail Use% Mounted on
tank                  5.4T  5.3T  124G  98% /tank
user at server:/tank# du -csh directory_to_clear
18G     directory_to_clear
18G     total
user at server:/tank# rm -Rf directory_to_clear
user at server:/tank# df -h /tank
Filesystem            Size  Used Avail Use% Mounted on
tank                  5.4T  5.3T  124G  98% /tank
user at server:/tank# zfs list -r -t snapshot tank
no datasets available

zfs version 3, zpool version 22, OpenSolaris build 130.

Any thoughts?
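A couple of other checks that might narrow down where the space is going (a sketch, assuming the pool and the top-level filesystem are both simply "tank"; the fuser check is only there to rule out deleted-but-still-open files):

user at server:/tank# zpool list tank                        # the pool's own alloc/free accounting
user at server:/tank# zfs get used,available,referenced tank # dataset-level usage as ZFS sees it
user at server:/tank# fuser -c /tank                         # processes holding files open on /tank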
Oh, a few items to highlight. There are no snapshots; there never have been on this volume. It's not just the directory in the example; it's any directory or file. The system was running fine up until it hit 96%. Also, a full scrub of the pool was done (it took nearly two days).
On Mon, Oct 25, 2010 at 4:57 PM, Cuyler Dingwell <cuyler at gmail.com> wrote:
> It's not just the directory in the example; it's any directory or file. The system was running fine up until it hit 96%. Also, a full scrub of the pool was done (it took nearly two days).

I'm just stabbing in the dark here, but are you certain /tank/directory_to_clear is not a separate dataset, visible with zfs list -t filesystem?

--
 - Tuomas
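For example (a sketch; the directory name is taken from the earlier output):

user at server:/tank# zfs list -r -t filesystem tank

If something like tank/directory_to_clear showed up in that listing, it would be a child dataset, and rm on its mountpoint contents would not release the space; it would need zfs destroy instead.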
No datasets in the pool. As another data point, I've been slowly trying to clear things out, but eventually the I/O operations hang.

Pool Free      Dir Used      File Used  File
...
189,238,526    44,771,026    102,413    FileName.part103.rar
189,238,526    44,668,613    102,413    FileName.part104.rar
189,238,526    44,566,201    102,413    FileName.part105.rar
189,238,526    44,463,788    102,413    FileName.part106.rar
189,238,526    44,361,376    102,413    FileName.part107.rar
189,238,526    44,258,963    102,413    FileName.part108.rar
189,238,526    44,156,551    102,413    FileName.part109.rar
189,238,526    44,054,138    102,388    FileName.part110.rar
189,238,526    43,951,750    102,413    FileName.part111.rar
189,238,526    43,849,338    102,413    FileName.part112.rar
189,238,526    43,746,925    102,413    FileName.part113.rar
189,238,526    43,644,513    102,413    FileName.part114.rar
189,238,788    43,542,100    102,414    FileName.part115.rar
189,240,308    43,439,686    102,413    FileName.part116.rar
189,242,745    43,337,274    102,413    FileName.part117.rar
353,519,874    43,234,854    102,413    FileName.part118.rar

After this one the console stopped responding. This is the only operation I've run on the file system since the reboot this morning. Checking the I/O statistics, the pool is getting accessed; I just have no idea what it's doing.

user at server:/opt/DTT# zpool iostat tank 5 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        6.95T   320G    164     20   360K  45.5K
tank        6.95T   320G     67      0   148K      0
tank        6.95T   320G     71      0   153K      0
tank        6.95T   320G     70      0   151K      0
tank        6.95T   320G     69      0   149K      0
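If the DTrace Toolkit in /opt/DTT is still usable while this is happening, something along these lines might show where that read traffic is coming from (a sketch; it assumes the stock toolkit scripts are installed there):

user at server:/opt/DTT# ./iotop -C 5         # rolling per-process I/O summary
user at server:/opt/DTT# ./iosnoop -m /tank   # per-I/O trace limited to the /tank mount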
On Mon, Oct 25, 2010 at 2:46 AM, Cuyler Dingwell <cuyler at gmail.com> wrote:
> I have a zpool that, once it hit 96% full, degraded horribly in performance. So, to free things up, I'm trying to clear out some space. The problem is that after I delete a directory it no longer shows up at the filesystem level (ls), but the free space isn't reclaimed. After a reboot, the directory is back.

In order to free up some space, try cat'ing /dev/null over some files you plan on deleting. That should free up enough space to allow the metadata updates for the actual deletes to go through.

-B

--
Brandon High : bhigh at freaks.com
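In practice that would look something like this (the file names are just the examples from the listing earlier in the thread):

user at server:/tank/directory_to_clear# cat /dev/null > FileName.part103.rar   # truncate in place, releasing the data blocks
user at server:/tank/directory_to_clear# rm FileName.part103.rar                # the remaining metadata update is tiny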
It wasn't a completely full volume, so I wasn't getting the classic 'no space' issue. What I ended up doing was booting OpenIndiana (build 147), which seemed to have more success clearing up the space. I also set up some scripts to clear out space more slowly: deleting a 4 GB file would take 1-2 minutes, then the script would pause to let the system quiesce before continuing. Once I got the pool down to ~90% I blew away the OpenSolaris install (build 130), installed OI, and then upgraded the pool from version 22 to version 28. Things seem smoother now, but that likely has more to do with the space being cleared up.

It would have been nice if performance didn't take a nose dive when nearing (and not even at) capacity. In my case I would have preferred the necessary space to be reserved so I got a space error before degrading to the point of uselessness.
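A minimal sketch of that kind of throttled cleanup loop (the path, file pattern, and pause length here are illustrative, not the exact script used):

#!/bin/sh
# Delete one file at a time, then give the pool a chance to commit
# the frees before issuing the next delete.
for f in /tank/directory_to_clear/*.rar; do
    rm "$f"
    sync
    sleep 60    # pause so the pool can quiesce between deletes
done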
On Oct 30, 2010, at 12:25 PM, Cuyler Dingwell wrote:
> It would have been nice if performance didn't take a nose dive when nearing (and not even at) capacity. In my case I would have preferred the necessary space to be reserved so I got a space error before degrading to the point of uselessness.

UFS reserves 10% so that its algorithms can still find some space, for just this reason. Many (all?) file systems have similar issues at some point. For later versions of ZFS, the change in the allocation algorithm from first fit to best fit occurs when the pool is 96% full. I suppose for some hardware configurations this could be viewed as a "nose dive." Clearly, it is a change in the work required.
 -- richard
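One way to approximate that reserved-space behaviour yourself is to keep a small, empty dataset with a reservation, so ordinary writes hit a space error before the pool crosses the allocator threshold (a sketch; the dataset name and size are examples, not anything from this system):

user at server:~# zfs create tank/slack
user at server:~# zfs set reservation=300G tank/slack   # roughly 5% of a 5.4T pool; adjust to taste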