Displaying 10 results from an estimated 10 matches for "defragger".
2005 Mar 02
3
searching for ext3 defrag/file move program
Hello everybody,
Reading about the speed improvements possible with files preloaded at boot
(which should be contiguous on disk), I searched for an ext3 defrag program.
I found an ext2 defrag program
(http://www.ibiblio.org/pub/Linux/system/filesystems/defrag-0.70.tar.gz,
available in debian as defrag) which has an ideal feature (moving
files according to a list) but refuses to work on ext3.
2009 Sep 15
1
FYI: Why is NFS slower on EL5 than EL4?
...mends disabling this latency
to get around it, but in doing that you might as well just use the
deadline scheduler.
Some other interesting tidbits for those using NFS.
NFS with many nfsd threads will create a lot of file system fragments
when writing large files.
XFS does have the advantage of a defragger (I hope some smart person
will develop one for ext2/3/4), but I am still looking for a better
way to protect against fragmentation than running a defragger, so if
anyone has an alternative I'm all ears.
Another hint for those using NFS with ESX, make sure your vmdk
partitions are 4k aligned or...
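The 4k alignment the poster mentions comes down to simple arithmetic: the partition's byte offset (start sector times sector size) must be a multiple of 4096. A minimal sketch, assuming 512-byte logical sectors (start sectors would come from e.g. fdisk output; the values below are illustrative):

```python
SECTOR_SIZE = 512  # bytes per logical sector (assumption; check your disk)

def is_4k_aligned(start_sector: int) -> bool:
    """True if a partition starting at this sector lies on a 4 KiB boundary."""
    return (start_sector * SECTOR_SIZE) % 4096 == 0

# Sector 63 (the old fdisk default) is misaligned; sector 2048 is aligned.
print(is_4k_aligned(63), is_4k_aligned(2048))  # False True
```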
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi.
System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1
We see many operations on nfs clients to that server being really slow (like 90 seconds for unlink()).
It's not a problem with the network; there's also plenty of CPU available.
Storage isn't saturated either.
First strange thing: normally, nfsd on that server runs about 1500-2500 threads.
I did
2005 Dec 16
1
Repacking files on ext3?
...that previous thread, it was stated that it was impossible to
have defrag work safely on an ext3 partition. I do not see why this is
so. If ext2 could be given a journal to make it crash resistant, then
why can't defrag be given the same thing?
Another comment said that such a defragger could not possibly
be done only in userspace. Unless you're talking about online
defragmentation, manipulating the filesystem offline to repack it
is entirely possible from userspace.
Is there still no way to do what I am trying?
P.S. Please CC me on replies, as I am not subscribed.
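The offline-repack idea in the post above can in fact be sketched entirely in userspace: rewrite the file's data sequentially so the allocator gets a chance to lay it out contiguously, then swap the copy into place atomically. This is a hedged sketch, not the defrag tool the thread is asking about; a real tool would also preserve ownership, permissions, and timestamps:

```python
import os
import shutil
import tempfile

def repack(path: str) -> None:
    """Rewrite a file's data sequentially, then atomically replace the original.

    A fresh sequential write lets the filesystem allocate (mostly)
    contiguous blocks; os.replace() makes the final swap atomic on POSIX.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as dst, open(path, "rb") as src:
            shutil.copyfileobj(src, dst)
            dst.flush()
            os.fsync(dst.fileno())  # data on disk before the rename
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

Since the temp file lives in the same directory, the final rename never crosses filesystems, and a crash mid-copy leaves the original untouched.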
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation of "static" data can currently be combated
only by zfs send-ing existing
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
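The arithmetic in the post above can be made concrete with a small sketch (a simplification that ignores parity layout and sector rounding):

```python
def per_disk_chunk_kib(recordsize_kib: float = 128, data_disks: int = 4) -> float:
    """Approximate per-disk chunk for one block on raidzN with M data disks.

    Each 128K block is split across the M data disks, so every disk holds
    only recordsize / M of it -- small chunks that make resilver seek-bound.
    """
    return recordsize_kib / data_disks

# With M = 4 data disks, each disk stores ~32K of every 128K block.
print(per_disk_chunk_kib(128, 4))  # 32.0
```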
2012 Apr 26
7
[PATCH 2/4] Btrfs: fix deadlock on sb->s_umount when doing umount
The reason for the deadlock:
Task umount():
  down_write(&s->s_umount)
  sync_filesystem()
    do auto-defragment and produce lots of dirty pages
  close_ctree()
    wait for the end of btrfs-cleaner

Task btrfs-cleaner:
  start_transaction
    reserve space
      shrink_delalloc()
        writeback_inodes_sb_nr_if_idle()
2010 Jan 18
18
Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk, the reservation would be 32 gigabytes. Seems a bit
excessive to me...
--
Jesus Cea Avion
jcea at jcea.es
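The reservation rule described above, the larger of 32MB or 1/64 of pool capacity, works out like this (a sketch assuming binary units):

```python
def zfs_reservation_bytes(pool_capacity_bytes: int) -> int:
    """Internal reservation: the larger of 32 MiB or 1/64 of pool capacity."""
    return max(32 * 2**20, pool_capacity_bytes // 64)

# For a 2 TiB pool, 1/64 of capacity dominates: 2 TiB / 64 = 32 GiB.
two_tib = 2 * 2**40
print(zfs_reservation_bytes(two_tib) // 2**30)  # 32 (GiB)
```

The 32MB floor only matters for pools under 2GB; above that, the 1/64 term always wins.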
2011 Aug 23
40
[PATCH 00/21] [RFC] Btrfs: restriper
Hello,
This patch series adds an initial implementation of restriper (it's a
clever name for a relocation framework that allows selective profile
changing and selective balancing, with some goodies like pausing/resuming
and reporting progress to the user).
Profile changing is global (per-FS) so far, per-subvolume profiles
require some discussion and can be implemented in future.
2011 Oct 04
68
[patch 00/65] Error handling patchset v3
Hi all -
Here's my current error handling patchset, against 3.1-rc8. Almost all of
this patchset is preparing for actual error handling. Before we start in
on that work, I'm trying to reduce the surface we need to worry about. It
turns out that there is a ton of code that returns an error code but never
actually reports an error.
The patchset has grown to 65 patches. 46 of them