Displaying 9 results from an estimated 9 matches for "ocfs2_reserve_suballoc_bit".
2011 Sep 05 (0 replies): Slow performance
...ith more than 20,000 users, so they are constantly creating,
removing and moving files.
Dumping the processes in D state while the server is in its "constant few
kbytes read only" state shows the following (a sketch for collecting such a dump follows the listing):
node#0:
10739 D imapd ocfs2_lookup_lock_orphan_dir
11658 D imapd ocfs2_reserve_suballoc_bits
12326 D imapd ocfs2_lookup_lock_orphan_dir
12330 D pop3d lock_rename
12351 D imapd ocfs2_lookup_lock_orphan_dir
12357 D imapd ocfs2_lookup_lock_orphan_dir
12359 D imapd unlinkat
12381 D imapd ocfs2_lookup_lock_orphan_dir...
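
For reference, a minimal user-space sketch (not part of the original report)
that collects a similar dump: it scans /proc for tasks in the D
(uninterruptible sleep) state and prints pid, comm and the wait channel,
assuming the standard Linux /proc layout.

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;

    if (!proc) {
        perror("opendir /proc");
        return 1;
    }
    while ((de = readdir(proc)) != NULL) {
        char path[64], comm[64] = "", wchan[128] = "";
        char state = 0;
        int pid;
        FILE *f;

        /* only numeric entries are process directories */
        if (sscanf(de->d_name, "%d", &pid) != 1)
            continue;

        /* /proc/PID/stat: "pid (comm) state ..." */
        snprintf(path, sizeof(path), "/proc/%d/stat", pid);
        if (!(f = fopen(path, "r")))
            continue;
        if (fscanf(f, "%*d (%63[^)]) %c", comm, &state) != 2)
            state = 0;
        fclose(f);

        if (state != 'D')       /* keep uninterruptible sleepers only */
            continue;

        /* /proc/PID/wchan: kernel symbol the task is blocked in */
        snprintf(path, sizeof(path), "/proc/%d/wchan", pid);
        if ((f = fopen(path, "r"))) {
            if (fscanf(f, "%127s", wchan) != 1)
                wchan[0] = '\0';
            fclose(f);
        }
        printf("%5d D %-16s %s\n", pid, comm, wchan);
    }
    closedir(proc);
    return 0;
}

Compile with e.g. gcc -Wall and run it while the stall is happening; the
wchan column corresponds to the function names quoted above.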
2009 Feb 26 (3 replies): [PATCH 0/3] ocfs2-1.4: Backport inode alloc from mainline.
Hi all,
This patch set is a backport of the inode alloc improvements from
mainline to ocfs2-1.4.
The patches are almost the same except for one thing:
Joel has added JBD2 support to ocfs2, so he added "max_blocks" to
alloc_context and a new function
"ocfs2_reserve_clusters_with_limit". We don't have that in ocfs2-1.4, so
there are some significant differences in patch 2.
2009 Feb 24 (2 replies): [PATCH 1/3] ocfs2: Optimize inode allocation by remembering last group.
In ocfs2, the inode block search looks for the "emptiest" inode
group to allocate from. So if an inode alloc file has many equally
(or almost equally) empty groups, new inodes will tend to get
spread out amongst them, which in turn can put them all over the
disk. This is undesirable because directory operations on conceptually
"nearby" inodes force a large number of seeks.
So
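
A hedged sketch of the "remember the last group" idea this patch describes;
the names (alloc_ctx, pick_group, and so on) are illustrative only and do
not match the actual ocfs2 symbols.

#include <stdio.h>
#include <stddef.h>

struct group {
    struct group *next;
    unsigned int free_bits;
};

struct alloc_ctx {
    struct group *groups;       /* chain of inode groups */
    struct group *last_group;   /* hint: group used by the previous alloc */
};

/* Old policy: every allocation searches for the emptiest group, so
 * equally empty groups take turns and inodes spread across the disk. */
static struct group *find_emptiest_group(struct alloc_ctx *ctx)
{
    struct group *g, *best = NULL;

    for (g = ctx->groups; g; g = g->next)
        if (!best || g->free_bits > best->free_bits)
            best = g;
    return best;
}

/* New policy: retry the previously used group first and fall back to
 * the search only when it is full, so consecutive inodes stay close. */
static struct group *pick_group(struct alloc_ctx *ctx)
{
    struct group *g = ctx->last_group;

    if (!g || g->free_bits == 0)
        g = find_emptiest_group(ctx);
    ctx->last_group = g;
    return g;
}

int main(void)
{
    struct group b = { NULL, 100 }, a = { &b, 100 };
    struct alloc_ctx ctx = { &a, NULL };
    int i;

    /* three allocations all land in the same group thanks to the hint */
    for (i = 0; i < 3; i++) {
        struct group *g = pick_group(&ctx);
        g->free_bits--;
        printf("alloc %d -> group with %u bits left\n", i, g->free_bits);
    }
    return 0;
}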
2009 Jan 15 (5 replies): [PATCH 0/3] ocfs2: Inode Allocation Strategy Improvement.v2
Changelog from V1 to V2:
1. Modified some code according to Mark's advice.
2. Attached some test statistics to the commit log of patch 3 and to this
e-mail as well. See below.
Hi all,
In ocfs2, when we create a fresh file system and create inodes in it,
they are contiguous and good for readdir+stat. But if we delete all
the inodes and create them again, the new inodes will get spread out and
that
2009 Apr 17 (26 replies): OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree.
All patches list the mainline commit hash.
Thanks
Sunil
2013 Feb 21 (1 reply): [PATCH] the ac->ac_allow_chain_relink=0 won't disable group relink
From: "Xiaowei.Hu" <xiaowei.hu at oracle.com>
ocfs2_block_group_alloc_discontig() disables chain relink by setting
ac->ac_allow_chain_relink = 0 because it grabs clusters from multiple
cluster groups. It doesn't keep the credits for all the chain relinks, but
ocfs2_claim_suballoc_bits overrides this in the following call trace:
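
The call trace itself is cut off in this snippet. As an illustrative
reduction (simplified, hypothetical names; not the real ocfs2 code), the bug
pattern is a helper deep in the call chain unconditionally re-enabling a
flag the caller deliberately cleared:

#include <stdbool.h>
#include <stdio.h>

struct alloc_context {
    bool allow_chain_relink;
};

/* Buggy helper: overrides whatever policy the caller set. */
static void claim_bits_buggy(struct alloc_context *ac)
{
    ac->allow_chain_relink = true;   /* clobbers the caller's 0 */
}

/* Fixed helper: leaves the caller's policy alone. */
static void claim_bits_fixed(struct alloc_context *ac)
{
    /* respect ac->allow_chain_relink as set by the caller */
    (void)ac;
}

int main(void)
{
    struct alloc_context ac = { .allow_chain_relink = false };

    claim_bits_buggy(&ac);
    printf("after buggy helper: relink=%d (caller's 0 lost)\n",
           ac.allow_chain_relink);

    ac.allow_chain_relink = false;
    claim_bits_fixed(&ac);
    printf("after fixed helper: relink=%d (caller's 0 kept)\n",
           ac.allow_chain_relink);
    return 0;
}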
2009 Mar 17 (33 replies): [git patches] Ocfs2 updates for 2.6.30
Hi,
The following patches comprise the bulk of Ocfs2 updates for the
2.6.30 merge window. Aside from larger, more involved fixes, we're adding
the following features, which I will describe in the order their patches are
mailed.
Sunil has exported some more state to our debugfs files and
consolidated some other aspects of our debugfs infrastructure. This will
further aid us in debugging
2008 Apr 02 (10 replies): [PATCH 0/62] Ocfs2 updates for 2.6.26-rc1
...c to dlmdebug.c
ocfs2/dlm: Fix lockname in lockres print function
ocfs2/dlm: Cleanup lockres print
Tao Ma (6):
ocfs2: Reconnect after idle time out.
ocfs2: Add support for cross extent block
ocfs2: Enable cross extent block merge.
ocfs2: Add a new parameter for ocfs2_reserve_suballoc_bits
ocfs2: Add ac_alloc_slot in ocfs2_alloc_context
ocfs2: Add inode stealing for ocfs2_reserve_new_inode
2008 Sep 04 (4 replies): [PATCH 0/3] ocfs2: Switch over to JBD2.
ocfs2 currently uses the Journaled Block Device (JBD) for its
journaling. This is a very stable and tested codebase. However, JBD
is limited by architecture to 32bit block numbers. This means an ocfs2
filesystem is limited to 2^32 blocks. With a 4K blocksize, that's 16TB.
People want larger volumes.
Fortunately, there is now JBD2. JBD2 adds 64bit block number support
and some other
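
As a quick sanity check of the 16TB figure quoted above (a standalone
sketch, not from the patch set): 2^32 blocks of 4K each is 2^44 bytes,
i.e. 16TB.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t max_blocks = 1ULL << 32;  /* JBD's 32-bit block-number limit */
    uint64_t blocksize  = 4096;        /* 4K blocksize */
    uint64_t bytes = max_blocks * blocksize;

    /* 2^32 * 2^12 = 2^44 bytes = 16 TB */
    printf("max ocfs2 volume with JBD: %llu bytes (%llu TB)\n",
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 40));
    return 0;
}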