Displaying 8 results from an estimated 8 matches for "btrfs_init_work".
2013 Aug 29
0
[PATCH] Btrfs: don't use an async starter for most of our workers
...-------
1 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 69e9afb..fa8b2c6 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2482,20 +2482,17 @@ int open_ctree(struct super_block *sb,
&fs_info->generic_worker);
btrfs_init_workers(&fs_info->delalloc_workers, "delalloc",
- fs_info->thread_pool_size,
- &fs_info->generic_worker);
+ fs_info->thread_pool_size, NULL);
btrfs_init_workers(&fs_info->flush_workers, "flush_delalloc",
- fs_info->thread_pool_siz...
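Put next to the btrfs_init_workers() prototype quoted in a later result, the visible part of this change reads roughly as the before/after below; only the delalloc pool is fully shown in the excerpt, so the other pools are omitted here:

/* Prototype as quoted further down in these results:
 *   void btrfs_init_workers(struct btrfs_workers *workers, char *name, int max,
 *                           struct btrfs_workers *async_starter);
 */

/* before: this pool deferred thread startup to the shared generic_worker */
btrfs_init_workers(&fs_info->delalloc_workers, "delalloc",
		   fs_info->thread_pool_size,
		   &fs_info->generic_worker);

/* after: no async starter for this pool */
btrfs_init_workers(&fs_info->delalloc_workers, "delalloc",
		   fs_info->thread_pool_size, NULL);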
2013 Apr 25
10
[PATCH v4 0/3] Btrfs: quota rescan for 3.10
The kernel side for rescan, which is needed if you want to enable qgroup
tracking on a non-empty volume. The first patch splits
btrfs_qgroup_account_ref into readable and reusable units. The second
patch adds the rescan implementation (refer to its commit message for a
description of the algorithm). The third patch starts an automatic
rescan when qgroups are enabled. It is only separated to
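For illustration only, kicking off and waiting for such a rescan from userspace would look roughly like the sketch below; the ioctl names and struct btrfs_ioctl_quota_rescan_args are taken from the interface as it was eventually merged into <linux/btrfs.h> and may differ in detail from this v4 posting:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	struct btrfs_ioctl_quota_rescan_args args;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <btrfs mount point>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);	/* any file or dir on the filesystem */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(&args, 0, sizeof(args));
	if (ioctl(fd, BTRFS_IOC_QUOTA_RESCAN, &args) < 0)	/* start the rescan */
		perror("BTRFS_IOC_QUOTA_RESCAN");
	else if (ioctl(fd, BTRFS_IOC_QUOTA_RESCAN_WAIT) < 0)	/* block until done */
		perror("BTRFS_IOC_QUOTA_RESCAN_WAIT");
	close(fd);
	return 0;
}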
2013 Aug 29
4
[PATCH] Notify caching_thread()s to give up on extent_commit_sem when needed.
caching_thread()s do all their work under read access to extent_commit_sem.
They give up this read access only when need_resched() tells them to, or
when they exit. As a result, somebody who wants WRITE access to this sem
might wait for a long time. This is especially problematic in
cache_block_group(), which can be called on critical paths like
find_free_extent() and in the commit path via
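A minimal sketch of the proposed behaviour (writers_waiting and the two helpers are illustrative names, not the actual patch): the caching loop drops its read lock not only on need_resched() but whenever a writer has signalled that it is waiting.

#include <linux/atomic.h>
#include <linux/rwsem.h>
#include <linux/sched.h>
#include <linux/types.h>

/* Hypothetical flag a would-be writer raises before calling down_write(). */
static atomic_t writers_waiting;

/* Hypothetical helpers standing in for the real caching_thread() body. */
extern bool more_extents_to_cache(void);
extern void cache_one_extent(void);

static void caching_loop(struct rw_semaphore *extent_commit_sem)
{
	down_read(extent_commit_sem);
	while (more_extents_to_cache()) {
		cache_one_extent();

		/* Give up the read lock when we need to reschedule, and also
		 * whenever somebody is blocked waiting for write access. */
		if (need_resched() || atomic_read(&writers_waiting)) {
			up_read(extent_commit_sem);
			cond_resched();
			down_read(extent_commit_sem);
		}
	}
	up_read(extent_commit_sem);
}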
2011 Jun 10
6
[PATCH v2 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and
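As a rough usage sketch (the prototypes below are assumed from the shape the interface later took in fs/btrfs/ctree.h and may not match this v2 posting exactly; do_tree_walk() is a hypothetical stand-in for scrub, balance, or any other walker):

/* In fs/btrfs/, i.e. with ctree.h available. Assumed prototypes:
 *   struct reada_control *btrfs_reada_add(struct btrfs_root *root,
 *                                         struct btrfs_key *start,
 *                                         struct btrfs_key *end);
 *   int btrfs_reada_wait(void *handle);
 */
#include "ctree.h"
#include <linux/err.h>

extern void do_tree_walk(struct btrfs_root *root, struct btrfs_key *start,
			 struct btrfs_key *end);	/* hypothetical walker */

static void walk_with_readahead(struct btrfs_root *root,
				struct btrfs_key *start, struct btrfs_key *end)
{
	struct reada_control *rc;

	/* Kick off background readahead for the whole key range... */
	rc = btrfs_reada_add(root, start, end);

	/* ...then do the actual walk; most tree blocks should now be cached. */
	do_tree_walk(root, start, end);

	/* Optionally wait for the readahead to drain before returning. */
	if (!IS_ERR_OR_NULL(rc))
		btrfs_reada_wait(rc);
}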
2011 Aug 26
0
[PATCH] Btrfs: make some functions return void
...struct btrfs_workers {
int btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work);
int btrfs_start_workers(struct btrfs_workers *workers, int num_workers);
-int btrfs_stop_workers(struct btrfs_workers *workers);
+void btrfs_stop_workers(struct btrfs_workers *workers);
void btrfs_init_workers(struct btrfs_workers *workers, char *name, int max,
struct btrfs_workers *async_starter);
-int btrfs_requeue_work(struct btrfs_work *work);
+void btrfs_requeue_work(struct btrfs_work *work);
void btrfs_set_work_high_prio(struct btrfs_work *work);
#endif
diff --git a/fs/btrfs/compression.c...
2011 Mar 08
6
[PATCH v1 0/6] btrfs: scrub
This series adds an initial implementation for scrub. It works quite
straightforwardly: userspace issues an ioctl for each device in the
fs. For each device, it enumerates the allocated device chunks. For
each chunk, the contained extents are enumerated and the data checksums
fetched. The extents are read sequentially and the checksums verified.
If an error occurs (checksum or EIO), a good copy
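The loop the text describes, sketched in C; every type and helper name below is a hypothetical stand-in, not the actual scrub code:

/* Hypothetical iteration and repair helpers standing in for the real code. */
struct scrub_chunk;
struct scrub_extent;

extern struct scrub_chunk *next_allocated_chunk(int devfd, struct scrub_chunk *prev);
extern struct scrub_extent *next_extent_in_chunk(struct scrub_chunk *c,
						 struct scrub_extent *prev);
extern void fetch_data_checksums(struct scrub_extent *e);
extern int read_and_verify(struct scrub_extent *e);	/* <0 on EIO/csum mismatch */
extern int rewrite_from_good_copy(struct scrub_extent *e); /* <0 if no good copy */

/* One pass over a single device, as issued per-device by the ioctl. */
static int scrub_one_device(int devfd)
{
	struct scrub_chunk *c = NULL;
	struct scrub_extent *e;
	int uncorrectable = 0;

	while ((c = next_allocated_chunk(devfd, c)) != NULL) {
		e = NULL;
		while ((e = next_extent_in_chunk(c, e)) != NULL) {
			fetch_data_checksums(e);
			if (read_and_verify(e) < 0 &&
			    rewrite_from_good_copy(e) < 0)
				uncorrectable++;
		}
	}
	return uncorrectable;
}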
2011 Oct 04
68
[patch 00/65] Error handling patchset v3
Hi all -
Here's my current error handling patchset, against 3.1-rc8. Almost all of
this patchset is preparing for actual error handling. Before we start in
on that work, I'm trying to reduce the surface we need to worry about. It
turns out that there is a ton of code that returns an error code but never
actually reports an error.
The patchset has grown to 65 patches. 46 of them
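A tiny, made-up illustration of that pattern (helper_pool and stop_one_helper are hypothetical): a function that computes an error code nobody ever checks can simply become void, which is the kind of surface reduction described above.

/* Hypothetical pool type and helper, only to illustrate the pattern. */
struct helper_pool { int num_running; };
extern int stop_one_helper(struct helper_pool *pool);

/* Before: an error code is returned, but no caller ever checks it. */
static int stop_helpers_reporting(struct helper_pool *pool)
{
	int ret = 0;

	while (pool->num_running)
		ret = stop_one_helper(pool);	/* last error silently wins */
	return ret;				/* dropped at every call site */
}

/* After: the signature admits there is nothing useful to report. */
static void stop_helpers(struct helper_pool *pool)
{
	while (pool->num_running)
		stop_one_helper(pool);
}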
2011 Jun 29
14
[PATCH v4 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and