Alex Lyakas
2013-Aug-29 10:31 UTC
[PATCH] Notify caching_thread()s to give up on extent_commit_sem when needed.
caching_thread()s do all their work under read access to extent_commit_sem.
They give up this read access only when need_resched() tells them to, or when
they exit. As a result, somebody who wants WRITE access to this sem might
wait for a long time. This is especially problematic in cache_block_group(),
which can be called on critical paths like find_free_extent() and in the
commit path via commit_cowonly_roots().

This patch is an RFC that attempts to fix this problem by notifying the
caching threads to give up on extent_commit_sem.

On a system with a lot of metadata (~20GB total metadata, ~10GB extent tree)
and an increased number of caching_threads, commits were very slow, stuck in
commit_cowonly_roots due to this issue. With this patch, commits no longer
get stuck in commit_cowonly_roots.

This patch is not intended to be applied, just a request to comment on
whether you agree this problem happens, and whether the fix goes in the
right direction.

Signed-off-by: Alex Lyakas <alex.btrfs@zadarastorage.com>
---
 fs/btrfs/ctree.h       |    7 +++++++
 fs/btrfs/disk-io.c     |    1 +
 fs/btrfs/extent-tree.c |    9 +++++----
 fs/btrfs/transaction.c |    2 +-
 4 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index c90be01..b602611 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1427,6 +1427,13 @@ struct btrfs_fs_info {
 	struct mutex ordered_extent_flush_mutex;
 	struct rw_semaphore extent_commit_sem;
+	/* notifies the readers to give up on the sem ASAP */
+	atomic_t extent_commit_sem_give_up_read;
+#define BTRFS_DOWN_WRITE_EXTENT_COMMIT_SEM(fs_info)                  \
+	do { atomic_inc(&(fs_info)->extent_commit_sem_give_up_read); \
+	     down_write(&(fs_info)->extent_commit_sem);              \
+	     atomic_dec(&(fs_info)->extent_commit_sem_give_up_read); \
+	} while (0)
 	struct rw_semaphore cleanup_work_sem;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 69e9afb..b88e688 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2291,6 +2291,7 @@ int open_ctree(struct super_block *sb,
 	mutex_init(&fs_info->cleaner_mutex);
 	mutex_init(&fs_info->volume_mutex);
 	init_rwsem(&fs_info->extent_commit_sem);
+	atomic_set(&fs_info->extent_commit_sem_give_up_read, 0);
 	init_rwsem(&fs_info->cleanup_work_sem);
 	init_rwsem(&fs_info->subvol_sem);
 	sema_init(&fs_info->uuid_tree_rescan_sem, 1);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 95c6539..28fee78 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -442,7 +442,8 @@ next:
 		if (ret)
 			break;
-		if (need_resched()) {
+		if (need_resched() ||
+		    atomic_read(&fs_info->extent_commit_sem_give_up_read) > 0) {
 			caching_ctl->progress = last;
 			btrfs_release_path(path);
 			up_read(&fs_info->extent_commit_sem);
@@ -632,7 +633,7 @@ static int cache_block_group(struct btrfs_block_group_cache *cache,
 		return 0;
 	}
-	down_write(&fs_info->extent_commit_sem);
+	BTRFS_DOWN_WRITE_EXTENT_COMMIT_SEM(fs_info);
 	atomic_inc(&caching_ctl->count);
 	list_add_tail(&caching_ctl->list, &fs_info->caching_block_groups);
 	up_write(&fs_info->extent_commit_sem);
@@ -5462,7 +5463,7 @@ void btrfs_prepare_extent_commit(struct btrfs_trans_handle *trans,
 	struct btrfs_block_group_cache *cache;
 	struct btrfs_space_info *space_info;
-	down_write(&fs_info->extent_commit_sem);
+	BTRFS_DOWN_WRITE_EXTENT_COMMIT_SEM(fs_info);
 	list_for_each_entry_safe(caching_ctl, next,
 				 &fs_info->caching_block_groups, list) {
@@ -8219,7 +8220,7 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
 	struct btrfs_caching_control *caching_ctl;
 	struct rb_node *n;
-	down_write(&info->extent_commit_sem);
+	BTRFS_DOWN_WRITE_EXTENT_COMMIT_SEM(info);
 	while (!list_empty(&info->caching_block_groups)) {
 		caching_ctl = list_entry(info->caching_block_groups.next,
 					 struct btrfs_caching_control, list);
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index cac4a3f..976d20a 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -969,7 +969,7 @@ static noinline int commit_cowonly_roots(struct btrfs_trans_handle *trans,
 		return ret;
 	}
-	down_write(&fs_info->extent_commit_sem);
+	BTRFS_DOWN_WRITE_EXTENT_COMMIT_SEM(fs_info);
 	switch_commit_root(fs_info->extent_root);
 	up_write(&fs_info->extent_commit_sem);
-- 
1.7.9.5
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Josef Bacik
2013-Aug-29 14:38 UTC
Re: [PATCH] Notify caching_thread()s to give up on extent_commit_sem when needed.
On Thu, Aug 29, 2013 at 01:31:05PM +0300, Alex Lyakas wrote:
> caching_thread()s do all their work under read access to extent_commit_sem.
> They give up on this read access only when need_resched() tells them, or
> when they exit. As a result, somebody that wants a WRITE access to this sem,
> might wait for a long time. Especially this is problematic in
> cache_block_group(),
> which can be called on critical paths like find_free_extent() and in commit
> path via commit_cowonly_roots().
>
> This patch is an RFC, that attempts to fix this problem, by notifying the
> caching threads to give up on extent_commit_sem.
>
> On a system with a lot of metadata (~20GB total metadata, ~10GB extent tree),
> with increased number of caching_threads, commits were very slow,
> stuck in commit_cowonly_roots, due to this issue.
> With this patch, commits no longer get stuck in commit_cowonly_roots.
>

But what kind of effect do you see on overall performance/runtime? Honestly I'd
expect we'd spend more of our time waiting for the caching kthread to fill in
free space so we can make allocations than waiting on this lock contention. I'd
like to see real numbers here to see what kind of effect this patch has on your
workload. (I don't doubt it makes a difference, I'm just curious to see how big
of a difference it makes.)

> This patch is not intended to be applied, just a request to comment on whether
> you agree this problem happens, and whether the fix goes in the right direction.
>

So I think we should do 2 things here:

1) Make a spin_lock for the caching ctl list. This is independent of the
purpose of the extent_commit_sem, so we should lock it independently.

2) Your idea for triggering the caching kthreads to stop what they are doing is
good, but it seems like a waste of effort when we could easily check the
semaphore to see if anybody is waiting on this lock. So I'm going to rig up a
function in the rwsem library to do this for us, and that way we can do
something like

	if (need_resched() || rwsem_is_contended(extent_commit_sem)) {
		drop and resched();
	}

and that way we only have to add yet another spin lock to the fs_info for the
caching ctl list and we can avoid this issue. How does that sound to you?

Thanks,

Josef
Alex Lyakas
2013-Aug-29 19:09 UTC
Re: [PATCH] Notify caching_thread()s to give up on extent_commit_sem when needed.
Hi Josef,

On Thu, Aug 29, 2013 at 5:38 PM, Josef Bacik <jbacik@fusionio.com> wrote:
> On Thu, Aug 29, 2013 at 01:31:05PM +0300, Alex Lyakas wrote:
>> [...]
>
> But what kind of effect do you see on overall performance/runtime? Honestly I'd
> expect we'd spend more of our time waiting for the caching kthread to fill in
> free space so we can make allocations than waiting on this lock contention. I'd
> like to see real numbers here to see what kind of effect this patch has on your
> workload. (I don't doubt it makes a difference, I'm just curious to see how big
> of a difference it makes.)

Primarily for me it affects the commit thread right after mounting, when it
spends time in the "critical part" of the commit, in which trans_no_join is
set, i.e., it is not possible to start a new transaction. So all the new
writers that want a transaction are delayed at this point.

Here are some numbers (and some more logs are in the attached file).
Filesystem has a good amount of metadata (btrfs-progs modified slightly to
print exact byte values):

root@dc:/home/zadara# btrfs fi df /btrfs/pool-00000002/
Data: total=846116945920(788.01GB), used=842106667008(784.27GB)
System: total=4194304(4.00MB), used=94208(92.00KB)
Metadata: total=31146901504(29.01GB), used=25248698368(23.51GB)

original code, 2 caching workers, try 1:
Aug 29 13:41:22 dc kernel: [28381.203745] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6627] COMMIT extwr:0 wr:1
Aug 29 13:41:25 dc kernel: [28384.624838] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6627] COMMIT took 3421 ms committers=1 open=0ms blocked=3188ms
Aug 29 13:41:25 dc kernel: [28384.624846] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6627] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 13:41:25 dc kernel: [28384.624850] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6627] wc:0 wpc:0 wew:0 fps:0
Aug 29 13:41:25 dc kernel: [28384.624854] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6627] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 13:41:25 dc kernel: [28384.624858] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6627] cfr:0 ccr:2088 pec:1099
Aug 29 13:41:25 dc kernel: [28384.624862] [17617][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6627] wrw:230 wrs:1

I have a breakdown of commit times here, to identify bottlenecks of the
commit. Times are in ms.
Names of phases are:

roo  - btrfs_run_ordered_operations
rdr1 - btrfs_run_delayed_refs (call 1)
cbg  - btrfs_create_pending_block_groups
rdr2 - btrfs_run_delayed_refs (call 2)
wc   - wait_for_commit (if was needed)
wpc  - wait for previous commit (if was needed)
wew  - wait for "external writers to detach"
fps  - flush_all_pending_stuffs
ww   - wait for all the other writers to detach
cs   - create_pending_snapshots
rdi  - btrfs_run_delayed_items
rdr3 - btrfs_run_delayed_refs (call 3)
cfr  - commit_fs_roots
ccr  - commit_cowonly_roots
pec  - btrfs_prepare_extent_commit
wrw  - btrfs_write_and_wait_transaction
wrs  - write_ctree_super

Two lines marked as "-" are the "critical part" of the commit.

original code, 2 caching workers, try 2:
Aug 29 13:43:30 dc kernel: [28508.683625] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6630] COMMIT extwr:0 wr:1
Aug 29 13:43:31 dc kernel: [28510.569269] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6630] COMMIT took 1885 ms committers=1 open=0ms blocked=1550ms
Aug 29 13:43:31 dc kernel: [28510.569276] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6630] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 13:43:31 dc kernel: [28510.569281] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6630] wc:0 wpc:0 wew:0 fps:0
Aug 29 13:43:31 dc kernel: [28510.569285] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6630] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 13:43:31 dc kernel: [28510.569288] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6630] cfr:0 ccr:1550 pec:0
Aug 29 13:43:31 dc kernel: [28510.569292] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6630] wrw:333 wrs:1

So you see that 1-2 secs are spent in "commit cowonly roots".
Now the patched code, and here, I admit, the difference is not so dramatic:

patched code, 2 caching workers, try 1:
Aug 29 14:08:19 dc kernel: [29997.819307] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6642] COMMIT extwr:0 wr:1
Aug 29 14:08:20 dc kernel: [29998.800342] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6642] COMMIT took 981 ms committers=1 open=0ms blocked=881ms
Aug 29 14:08:20 dc kernel: [29998.800350] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6642] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 14:08:20 dc kernel: [29998.800354] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6642] wc:0 wpc:0 wew:0 fps:0
Aug 29 14:08:20 dc kernel: [29998.800358] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6642] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 14:08:20 dc kernel: [29998.800362] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6642] cfr:0 ccr:880 pec:1
Aug 29 14:08:20 dc kernel: [29998.800365] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6642] wrw:98 wrs:1

patched code, 2 caching workers, try 2:
Aug 29 14:09:18 dc kernel: [30057.375432] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6645] COMMIT extwr:0 wr:1
Aug 29 14:09:19 dc kernel: [30058.079811] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6645] COMMIT took 704 ms committers=1 open=0ms blocked=643ms
Aug 29 14:09:19 dc kernel: [30058.079820] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6645] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 14:09:19 dc kernel: [30058.079824] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6645] wc:0 wpc:0 wew:0 fps:0
Aug 29 14:09:19 dc kernel: [30058.079828] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6645] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 14:09:19 dc kernel: [30058.079832] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6645] cfr:0 ccr:642 pec:1
Aug 29 14:09:19 dc kernel: [30058.079836] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6645] wrw:59 wrs:0

but still there is some improvement of commit time.

Now I changed the number of caching workers to 32, to improve the time to
load the metadata, otherwise it takes a lot of time for the FS to become
responsive. Also I modified the code to start caching workers like this:

	btrfs_init_workers(&fs_info->caching_workers, "cache",
			   32, NULL/*async_helper*/);
	/* use low thresh to quickly spawn needed new threads */
	fs_info->caching_workers.idle_thresh = 2;

As I explained in my other email named "btrfs:async-thread:
atomic_start_pending=1 is set, but it's too late", even with two caching
threads, only one does all the job. So I don't pass the async helper (don't
know if that's a correct thing to do). So:

original code, 32 caching workers, try 1:
Aug 29 13:53:56 dc kernel: [29135.456301] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6636] COMMIT extwr:0 wr:1
Aug 29 13:54:56 dc kernel: [29195.561173] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6636] COMMIT took 60104 ms committers=1 open=0ms blocked=60049ms
Aug 29 13:54:56 dc kernel: [29195.561187] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6636] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 13:54:56 dc kernel: [29195.561201] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6636] wc:0 wpc:0 wew:0 fps:0
Aug 29 13:54:56 dc kernel: [29195.561216] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6636] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 13:54:56 dc kernel: [29195.561220] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6636] cfr:0 ccr:59163 pec:885
Aug 29 13:54:56 dc kernel: [29195.561224] [539][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6636] wrw:54 wrs:1

60 seconds to commit, out of which 59 are in commit_cowonly_roots.
original code, 32 caching workers, try 2:
Aug 29 13:56:54 dc kernel: [29312.747760] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6639] COMMIT extwr:0 wr:1
Aug 29 13:58:15 dc kernel: [29394.289640] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6639] COMMIT took 81541 ms committers=1 open=0ms blocked=81396ms
Aug 29 13:58:15 dc kernel: [29394.289649] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6639] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 13:58:15 dc kernel: [29394.289688] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6639] wc:0 wpc:0 wew:0 fps:0
Aug 29 13:58:15 dc kernel: [29394.289694] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6639] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 13:58:15 dc kernel: [29394.289700] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6639] cfr:0 ccr:80309 pec:1086
Aug 29 13:58:15 dc kernel: [29394.289705] [6121][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6639] wrw:140 wrs:5

81 seconds to commit, out of which 80 seconds in commit_cowonly_roots!
Now the patched code with 32 threads:

patched code, 32 caching workers, try 1:
Aug 29 14:12:29 dc kernel: [30248.074275] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6648] COMMIT extwr:0 wr:1
Aug 29 14:12:31 dc kernel: [30249.974844] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6648] COMMIT took 1900 ms committers=1 open=0ms blocked=1725ms
Aug 29 14:12:31 dc kernel: [30249.974851] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6648] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 14:12:31 dc kernel: [30249.974855] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6648] wc:0 wpc:0 wew:0 fps:0
Aug 29 14:12:31 dc kernel: [30249.974859] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6648] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 14:12:31 dc kernel: [30249.974863] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6648] cfr:0 ccr:1720 pec:5
Aug 29 14:12:31 dc kernel: [30249.974867] [1696][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6648] wrw:169 wrs:4

patched code, 32 caching workers, try 2:
Aug 29 14:13:35 dc kernel: [30314.378026] [1698][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6651] COMMIT took 1117 ms committers=1 open=0ms blocked=999ms
Aug 29 14:13:35 dc kernel: [30314.378033] [1698][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6651] roo:0 rdr1:0 cbg:0 rdr2:0
Aug 29 14:13:35 dc kernel: [30314.378041] [1698][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6651] wc:0 wpc:0 wew:0 fps:0
Aug 29 14:13:35 dc kernel: [30314.378132] [1698][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6651] ww:0 cs:0 rdi:0 rdr3:0
Aug 29 14:13:35 dc kernel: [30314.378136] [1698][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6651] cfr:0 ccr:994 pec:4
Aug 29 14:13:35 dc kernel: [30314.378140] [1698][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6651] wrw:115 wrs:2

much better.
(And I promise these prints are real.) In the attached log, there are some
prints that let you see how much time it takes to queue a caching request
with the old-vs-new code.

>> This patch is not intended to be applied, just a request to comment on whether
>> you agree this problem happens, and whether the fix goes in the right direction.
>>
>
> So I think we should do 2 things here
>
> 1) Make a spin_lock for the caching ctl list. This is independent of the
> purpose of the extent_commit_sem, so we should lock it independently.

Yes, I also thought that the caching ctl list should be protected by some
other quicker lock, but in the code it is always accessed under this rwsem.

> 2) Your idea for triggering the caching kthreads to stop what they are doing is
> good, but it seems like a waste of effort when we could easily check the
> semaphore to see if anybody is waiting on this lock. So I'm going to rig up a
> function in the rwsem library to do this for us, and that way we can do
> something like
>
>	if (need_resched() || rwsem_is_contended(extent_commit_sem)) {
>		drop and resched();
>	}
>

Perfect! Will it be accepted to mainline just like that? :) Cool!

> and that way we only have to add yet another spin lock to the fs_info for the
> caching ctl list and we can avoid this issue. How does that sound to you?

Awesome!

Thanks,
Alex.

> Thanks,
>
> Josef
Josef Bacik
2013-Aug-29 19:55 UTC
Re: [PATCH] Notify caching_thread()s to give up on extent_commit_sem when needed.
On Thu, Aug 29, 2013 at 10:09:29PM +0300, Alex Lyakas wrote:
> Hi Josef,
>
> [...]
>
> but still there is some improvement of commit time.
>

Ok so that's what I expected, a marginal improvement under normal conditions.

> Now I changed the number of caching workers to 32, to improve the time
> to load the metadata, otherwise it takes a lot of time for the FS to
> become responsive. Also I modified the code to start caching workers
> like this:
>
>	btrfs_init_workers(&fs_info->caching_workers, "cache",
>			   32, NULL/*async_helper*/);
>	/* use low thresh to quickly spawn needed new threads */
>	fs_info->caching_workers.idle_thresh = 2;
>
> As I explained in my other email named "btrfs:async-thread:
> atomic_start_pending=1 is set, but it's too late", even with two
> caching threads, only one does all the job. So I don't pass the async
> helper (don't know if that's a correct thing to do).
>

So does jacking the limit up to 32 help the load? The old "only let 2 cachers
go at once" stuff was because we used to do the caching on the normal extent
tree, so we took locks and it killed performance. But now that we use the
commit root there is really no reason to limit our caching threads, other
than the fact we'll be thrashing the disk by all of the threads reading so
much. I'll look at the other email you sent, I've not read it.

Thanks,

Josef
Alex Lyakas
2013-Aug-29 22:18 UTC
Re: [PATCH] Notify caching_thread()s to give up on extent_commit_sem when needed.
On Thu, Aug 29, 2013 at 10:55 PM, Josef Bacik <jbacik@fusionio.com> wrote:> On Thu, Aug 29, 2013 at 10:09:29PM +0300, Alex Lyakas wrote: >> Hi Josef, >> >> On Thu, Aug 29, 2013 at 5:38 PM, Josef Bacik <jbacik@fusionio.com> wrote: >> > On Thu, Aug 29, 2013 at 01:31:05PM +0300, Alex Lyakas wrote: >> >> caching_thread()s do all their work under read access to extent_commit_sem. >> >> They give up on this read access only when need_resched() tells them, or >> >> when they exit. As a result, somebody that wants a WRITE access to this sem, >> >> might wait for a long time. Especially this is problematic in >> >> cache_block_group(), >> >> which can be called on critical paths like find_free_extent() and in commit >> >> path via commit_cowonly_roots(). >> >> >> >> This patch is an RFC, that attempts to fix this problem, by notifying the >> >> caching threads to give up on extent_commit_sem. >> >> >> >> On a system with a lot of metadata (~20Gb total metadata, ~10Gb extent tree), >> >> with increased number of caching_threads, commits were very slow, >> >> stuck in commit_cowonly_roots, due to this issue. >> >> With this patch, commits no longer get stuck in commit_cowonly_roots. >> >> >> > >> > But what kind of effect do you see on overall performance/runtime? Honestly I''d >> > expect we''d spend more of our time waiting for the caching kthread to fill in >> > free space so we can make allocations than waiting on this lock contention. I''d >> > like to see real numbers here to see what kind of effect this patch has on your >> > workload. (I don''t doubt it makes a difference, I''m just curious to see how big >> > of a difference it makes.) >> >> Primarily for me it affects the commit thread right after mounting, >> when it spends time in the "critical part" of the commit, in which >> trans_no_join is set, i.e., it is not possible to start a new >> transaction. So all the new writers that want a transaction are >> delayed at this point. 
>> >> Here are some numbers (and some more logs are in the attached file). >> >> Filesystem has a good amount of metadata (btrfs-progs modified >> slightly to print exact byte values): >> root@dc:/home/zadara# btrfs fi df /btrfs/pool-00000002/ >> Data: total=846116945920(788.01GB), used=842106667008(784.27GB) >> System: total=4194304(4.00MB), used=94208(92.00KB) >> Metadata: total=31146901504(29.01GB), used=25248698368(23.51GB) >> >> original code, 2 caching workers, try 1 >> Aug 29 13:41:22 dc kernel: [28381.203745] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6627] COMMIT >> extwr:0 wr:1 >> Aug 29 13:41:25 dc kernel: [28384.624838] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6627] COMMIT took >> 3421 ms committers=1 open=0ms blocked=3188ms >> Aug 29 13:41:25 dc kernel: [28384.624846] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6627] roo:0 rdr1:0 >> cbg:0 rdr2:0 >> Aug 29 13:41:25 dc kernel: [28384.624850] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6627] wc:0 wpc:0 >> wew:0 fps:0 >> Aug 29 13:41:25 dc kernel: [28384.624854] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6627] ww:0 cs:0 >> rdi:0 rdr3:0 >> Aug 29 13:41:25 dc kernel: [28384.624858] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6627] cfr:0 >> ccr:2088 pec:1099 >> Aug 29 13:41:25 dc kernel: [28384.624862] [17617][tx]btrfs >> [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6627] wrw:230 wrs:1 >> >> I have a breakdown of commit times here, to identify bottlenecks of >> the commit. Times are in ms. 
>> The names of the phases are:
>>
>> roo  - btrfs_run_ordered_operations
>> rdr1 - btrfs_run_delayed_refs (call 1)
>> cbg  - btrfs_create_pending_block_groups
>> rdr2 - btrfs_run_delayed_refs (call 2)
>> wc   - wait_for_commit (if it was needed)
>> wpc  - wait for previous commit (if it was needed)
>> wew  - wait for "external writers to detach"
>> fps  - flush_all_pending_stuffs
>> ww   - wait for all the other writers to detach
>> cs   - create_pending_snapshots
>> rdi  - btrfs_run_delayed_items
>> rdr3 - btrfs_run_delayed_refs (call 3)
>> cfr  - commit_fs_roots
>> ccr  - commit_cowonly_roots
>> pec  - btrfs_prepare_extent_commit
>> wrw  - btrfs_write_and_wait_transaction
>> wrs  - write_ctree_super
>>
>> The two lines marked with "-" are the "critical part" of the commit.
>>
>> original code, 2 caching workers, try 2
>> Aug 29 13:43:30 dc kernel: [28508.683625] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6630] COMMIT extwr:0 wr:1
>> Aug 29 13:43:31 dc kernel: [28510.569269] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6630] COMMIT took 1885 ms committers=1 open=0ms blocked=1550ms
>> Aug 29 13:43:31 dc kernel: [28510.569276] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6630] roo:0 rdr1:0 cbg:0 rdr2:0
>> Aug 29 13:43:31 dc kernel: [28510.569281] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6630] wc:0 wpc:0 wew:0 fps:0
>> Aug 29 13:43:31 dc kernel: [28510.569285] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6630] ww:0 cs:0 rdi:0 rdr3:0
>> Aug 29 13:43:31 dc kernel: [28510.569288] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6630] cfr:0 ccr:1550 pec:0
>> Aug 29 13:43:31 dc kernel: [28510.569292] [22490][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6630] wrw:333 wrs:1
>>
>> So you see that 1-2 seconds are spent in "commit cowonly roots".
>> Now the patched code, and here, I admit, the difference is not so dramatic:
>>
>> patched code, 2 caching workers, try 1
>> Aug 29 14:08:19 dc kernel: [29997.819307] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6642] COMMIT extwr:0 wr:1
>> Aug 29 14:08:20 dc kernel: [29998.800342] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6642] COMMIT took 981 ms committers=1 open=0ms blocked=881ms
>> Aug 29 14:08:20 dc kernel: [29998.800350] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6642] roo:0 rdr1:0 cbg:0 rdr2:0
>> Aug 29 14:08:20 dc kernel: [29998.800354] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6642] wc:0 wpc:0 wew:0 fps:0
>> Aug 29 14:08:20 dc kernel: [29998.800358] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6642] ww:0 cs:0 rdi:0 rdr3:0
>> Aug 29 14:08:20 dc kernel: [29998.800362] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6642] cfr:0 ccr:880 pec:1
>> Aug 29 14:08:20 dc kernel: [29998.800365] [24783][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6642] wrw:98 wrs:1
>>
>> patched code, 2 caching workers, try 2
>> Aug 29 14:09:18 dc kernel: [30057.375432] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_STARTED:439] FS[dm-119] txn[6645] COMMIT extwr:0 wr:1
>> Aug 29 14:09:19 dc kernel: [30058.079811] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:519] FS[dm-119] txn[6645] COMMIT took 704 ms committers=1 open=0ms blocked=643ms
>> Aug 29 14:09:19 dc kernel: [30058.079820] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:524] FS[dm-119] txn[6645] roo:0 rdr1:0 cbg:0 rdr2:0
>> Aug 29 14:09:19 dc kernel: [30058.079824] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:529] FS[dm-119] txn[6645] wc:0 wpc:0 wew:0 fps:0
>> Aug 29 14:09:19 dc kernel: [30058.079828] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:534] -FS[dm-119] txn[6645] ww:0 cs:0 rdi:0 rdr3:0
>> Aug 29 14:09:19 dc kernel: [30058.079832] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:538] -FS[dm-119] txn[6645] cfr:0 ccr:642 pec:1
>> Aug 29 14:09:19 dc kernel: [30058.079836] [24781][tx]btrfs [ZBTRFS_TXN_COMMIT_PHASE_DONE:541] FS[dm-119] txn[6645] wrw:59 wrs:0
>>
>> but still there is some improvement in commit time.
>>
>
> Ok so that's what I expected, a marginal improvement under normal conditions.
>
>> Now I changed the number of caching workers to 32, to improve the time
>> to load the metadata; otherwise it takes a long time for the FS to
>> become responsive. I also modified the code to start the caching workers
>> like this:
>> btrfs_init_workers(&fs_info->caching_workers, "cache",
>>                    32, NULL/*async_helper*/);
>> /* use a low thresh to quickly spawn needed new threads */
>> fs_info->caching_workers.idle_thresh = 2;
>>
>> As I explained in my other email named "btrfs: async-thread:
>> atomic_start_pending=1 is set, but it's too late", even with two
>> caching threads, only one does all the work. So I don't pass the async
>> helper (I don't know whether that's the correct thing to do).
>>
>
> So does jacking the limit up to 32 help the load? The old "only let 2 cachers
> go at once" stuff was because we used to do the caching on the normal extent
> tree, so we took locks and it killed performance. But now that we use the
> commit root there is really no reason to limit our caching threads, other than
> the fact we'll be thrashing the disk by all of the threads reading so much.
> I'll look at the other email you sent, I've not read it. Thanks,

What we see is that a single caching_thread, with an underlying SSD, can
generate only up to 4Mb/s of reading in the pages, because it reads them
synchronously, one by one, as it scans its part of the extent tree. With
32 threads, we are able to see 40Mb/s of caching in the block groups.
So yes, the system becomes more responsive on SOD (start of day), when it
needs to "warm up" the free space cache, which is exactly the critical
period of time when the host is waiting to resume the IO.

What you did not ask me (and thanks for that!) is why not use the
free-space-cache, which is supposed to solve this exact problem. Apart
from one stack overrun we hit with it (which was fixed in later kernels),
we see that free-space EXTENT_DATAs are allocated out of DATA block
groups. For us this is somewhat of an issue, because we are routing
METADATA IOs to faster SSD storage. I haven't yet looked deeper into
whether it is straightforward to allocate free-space entries out of
METADATA block groups, but do you think this would make sense? After all,
free-space entries are metadata...

Thanks!
Alex.

>
> Josef