Filipe David Borba Manana
2013-Aug-29 12:44 UTC
[PATCH] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
25.000 - 33.930: 211 ######
33.930 - 45.927: 277 ########
45.927 - 62.045: 1834 #####################################################
62.045 - 83.699: 1203 ###################################
83.699 - 112.789: 609 ##################
112.789 - 151.872: 450 #############
151.872 - 204.377: 246 #######
204.377 - 274.917: 124 ####
274.917 - 369.684: 48 #
369.684 - 497.000: 11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
10.000 - 20.339: 3160 #####################################################
20.339 - 40.397: 1131 ###################
40.397 - 79.308: 507 #########
79.308 - 154.794: 199 ###
154.794 - 301.232: 14 |
301.232 - 585.313: 1 |
585.313 - 8303.000: 1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.
Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---
 fs/btrfs/ctree.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 59 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..5b20eec 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,59 @@ done:
         return ret;
 }

+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+                      int level, int *prev_cmp, int *slot)
+{
+        unsigned long eb_offset = 0;
+        unsigned long len_left = b->len;
+        char *kaddr = NULL;
+        unsigned long map_start = 0;
+        unsigned long map_len = 0;
+        unsigned long offset;
+        struct btrfs_disk_key *k = NULL;
+        struct btrfs_disk_key unaligned;
+
+        if (*prev_cmp != 0) {
+                *prev_cmp = bin_search(b, key, level, slot);
+                return *prev_cmp;
+        }
+
+        if (level == 0)
+                offset = offsetof(struct btrfs_leaf, items);
+        else
+                offset = offsetof(struct btrfs_node, ptrs);
+
+        /*
+         * Map the entire extent buffer, otherwise callers can't access
+         * all keys/items of the leaf/node. Specially needed for case
+         * where leaf/node size is greater than page cache size.
+         */
+        while (len_left > 0) {
+                unsigned long len = min(PAGE_CACHE_SIZE, len_left);
+                int err;
+
+                err = map_private_extent_buffer(b, eb_offset, len, &kaddr,
+                                                &map_start, &map_len);
+                len_left -= len;
+                eb_offset += len;
+                if (k)
+                        continue;
+                if (!err) {
+                        k = (struct btrfs_disk_key *)(kaddr + offset -
+                                                      map_start);
+                } else {
+                        read_extent_buffer(b, &unaligned,
+                                           offset, sizeof(unaligned));
+                        k = &unaligned;
+                }
+        }
+
+        BUG_ON(comp_keys(k, key) != 0);
+        *slot = 0;
+
+        return 0;
+}
+
 /*
  * look for key in the tree.
 * path is filled in with nodes along the way
 * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2507,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
         int write_lock_level = 0;
         u8 lowest_level = 0;
         int min_write_lock_level;
+        int prev_cmp;

         lowest_level = p->lowest_level;
         WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2538,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
         min_write_lock_level = write_lock_level;

 again:
+        prev_cmp = -1;
         /*
          * we try very hard to do read locks on the root
          */
@@ -2584,7 +2639,7 @@ cow_done:
                 if (!cow)
                         btrfs_unlock_up_safe(p, level + 1);

-                ret = bin_search(b, key, level, &slot);
+                ret = key_search(b, key, level, &prev_cmp, &slot);

                 if (level != 0) {
                         int dec = 0;
@@ -2719,6 +2774,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
         int level;
         int lowest_unlock = 1;
         u8 lowest_level = 0;
+        int prev_cmp;

         lowest_level = p->lowest_level;
         WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2785,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
         }

 again:
+        prev_cmp = -1;
         b = get_old_root(root, time_seq);
         level = btrfs_header_level(b);
         p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2803,7 @@ again:
         */
         btrfs_unlock_up_safe(p, level + 1);

-        ret = bin_search(b, key, level, &slot);
+        ret = key_search(b, key, level, &prev_cmp, &slot);

         if (level != 0) {
                 int dec = 0;
-- 
1.7.9.5
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Filipe David Borba Manana
2013-Aug-29 13:42 UTC
[PATCH v2] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
25.000 - 33.930: 211 ######
33.930 - 45.927: 277 ########
45.927 - 62.045: 1834 #####################################################
62.045 - 83.699: 1203 ###################################
83.699 - 112.789: 609 ##################
112.789 - 151.872: 450 #############
151.872 - 204.377: 246 #######
204.377 - 274.917: 124 ####
274.917 - 369.684: 48 #
369.684 - 497.000: 11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
10.000 - 20.339: 3160 #####################################################
20.339 - 40.397: 1131 ###################
40.397 - 79.308: 507 #########
79.308 - 154.794: 199 ###
154.794 - 301.232: 14 |
301.232 - 585.313: 1 |
585.313 - 8303.000: 1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---
V2: Simplified code, removed unnecessary code.
 fs/btrfs/ctree.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..a159270 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,42 @@ done:
         return ret;
 }

+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+                      int level, int *prev_cmp, int *slot)
+{
+        char *kaddr = NULL;
+        unsigned long map_start = 0;
+        unsigned long map_len = 0;
+        unsigned long offset;
+        struct btrfs_disk_key *k = NULL;
+        struct btrfs_disk_key unaligned;
+        int err;
+
+        if (*prev_cmp != 0) {
+                *prev_cmp = bin_search(b, key, level, slot);
+                return *prev_cmp;
+        }
+
+        if (level == 0)
+                offset = offsetof(struct btrfs_leaf, items);
+        else
+                offset = offsetof(struct btrfs_node, ptrs);
+
+        err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
+                                        &kaddr, &map_start, &map_len);
+        if (!err) {
+                k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
+        } else {
+                read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
+                k = &unaligned;
+        }
+
+        BUG_ON(comp_keys(k, key) != 0);
+        *slot = 0;
+
+        return 0;
+}
+
 /*
  * look for key in the tree.
 * path is filled in with nodes along the way
 * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2490,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
         int write_lock_level = 0;
         u8 lowest_level = 0;
         int min_write_lock_level;
+        int prev_cmp;

         lowest_level = p->lowest_level;
         WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2521,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
         min_write_lock_level = write_lock_level;

 again:
+        prev_cmp = -1;
         /*
          * we try very hard to do read locks on the root
          */
@@ -2584,7 +2622,7 @@ cow_done:
                 if (!cow)
                         btrfs_unlock_up_safe(p, level + 1);

-                ret = bin_search(b, key, level, &slot);
+                ret = key_search(b, key, level, &prev_cmp, &slot);

                 if (level != 0) {
                         int dec = 0;
@@ -2719,6 +2757,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
         int level;
         int lowest_unlock = 1;
         u8 lowest_level = 0;
+        int prev_cmp;

         lowest_level = p->lowest_level;
         WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2768,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
         }

 again:
+        prev_cmp = -1;
         b = get_old_root(root, time_seq);
         level = btrfs_header_level(b);
         p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2786,7 @@ again:
         */
         btrfs_unlock_up_safe(p, level + 1);

-        ret = bin_search(b, key, level, &slot);
+        ret = key_search(b, key, level, &prev_cmp, &slot);

         if (level != 0) {
                 int dec = 0;
-- 
1.7.9.5
Josef Bacik
2013-Aug-29 13:49 UTC
Re: [PATCH] Btrfs: optimize key searches in btrfs_search_slot
On Thu, Aug 29, 2013 at 01:44:13PM +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
>
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
>
> Current approach:
>
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
> 25.000 - 33.930: 211 ######
> 33.930 - 45.927: 277 ########
> 45.927 - 62.045: 1834 #####################################################
> 62.045 - 83.699: 1203 ###################################
> 83.699 - 112.789: 609 ##################
> 112.789 - 151.872: 450 #############
> 151.872 - 204.377: 246 #######
> 204.377 - 274.917: 124 ####
> 274.917 - 369.684: 48 #
> 369.684 - 497.000: 11 |
>
> Approach proposed by this patch:
>
> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
> 10.000 - 20.339: 3160 #####################################################
> 20.339 - 40.397: 1131 ###################
> 40.397 - 79.308: 507 #########
> 79.308 - 154.794: 199 ###
> 154.794 - 301.232: 14 |
> 301.232 - 585.313: 1 |
> 585.313 - 8303.000: 1 |
>
> These samples were captured during a run of the btrfs tests 001, 002 and
> 004 in the xfstests, with a leaf/node size of 4Kb.
>
> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
> ---
>  fs/btrfs/ctree.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 59 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 5fa521b..5b20eec 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -2426,6 +2426,59 @@ done:
>          return ret;
>  }
>
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +                      int level, int *prev_cmp, int *slot)
> +{
> +        unsigned long eb_offset = 0;
> +        unsigned long len_left = b->len;
> +        char *kaddr = NULL;
> +        unsigned long map_start = 0;
> +        unsigned long map_len = 0;
> +        unsigned long offset;
> +        struct btrfs_disk_key *k = NULL;
> +        struct btrfs_disk_key unaligned;
> +
> +        if (*prev_cmp != 0) {
> +                *prev_cmp = bin_search(b, key, level, slot);
> +                return *prev_cmp;
> +        }
> +
> +        if (level == 0)
> +                offset = offsetof(struct btrfs_leaf, items);
> +        else
> +                offset = offsetof(struct btrfs_node, ptrs);
> +
> +        /*
> +         * Map the entire extent buffer, otherwise callers can't access
> +         * all keys/items of the leaf/node. Specially needed for case
> +         * where leaf/node size is greater than page cache size.
> +         */
> +        while (len_left > 0) {
> +                unsigned long len = min(PAGE_CACHE_SIZE, len_left);
> +                int err;
> +
> +                err = map_private_extent_buffer(b, eb_offset, len, &kaddr,
> +                                                &map_start, &map_len);
> +                len_left -= len;
> +                eb_offset += len;
> +                if (k)
> +                        continue;
> +                if (!err) {
> +                        k = (struct btrfs_disk_key *)(kaddr + offset -
> +                                                      map_start);
> +                } else {
> +                        read_extent_buffer(b, &unaligned,
> +                                           offset, sizeof(unaligned));
> +                        k = &unaligned;
> +                }
> +        }
> +

This confuses me, if we're at slot 0 we should be at the front of the first
page, no matter what, so why not just read the first key and carry on?

> +        BUG_ON(comp_keys(k, key) != 0);

Please use the ASSERT() macro.
Thanks,

Josef
Filipe David Manana
2013-Aug-29 13:53 UTC
Re: [PATCH] Btrfs: optimize key searches in btrfs_search_slot
On Thu, Aug 29, 2013 at 2:49 PM, Josef Bacik <jbacik@fusionio.com> wrote:
> On Thu, Aug 29, 2013 at 01:44:13PM +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Current approach:
>>
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
>> 25.000 - 33.930: 211 ######
>> 33.930 - 45.927: 277 ########
>> 45.927 - 62.045: 1834 #####################################################
>> 62.045 - 83.699: 1203 ###################################
>> 83.699 - 112.789: 609 ##################
>> 112.789 - 151.872: 450 #############
>> 151.872 - 204.377: 246 #######
>> 204.377 - 274.917: 124 ####
>> 274.917 - 369.684: 48 #
>> 369.684 - 497.000: 11 |
>>
>> Approach proposed by this patch:
>>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
>> 10.000 - 20.339: 3160 #####################################################
>> 20.339 - 40.397: 1131 ###################
>> 40.397 - 79.308: 507 #########
>> 79.308 - 154.794: 199 ###
>> 154.794 - 301.232: 14 |
>> 301.232 - 585.313: 1 |
>> 585.313 - 8303.000: 1 |
>>
>> These samples were captured during a run of the btrfs tests 001, 002 and
>> 004 in the xfstests, with a leaf/node size of 4Kb.
>>
>> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
>> ---
>>  fs/btrfs/ctree.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 59 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
>> index 5fa521b..5b20eec 100644
>> --- a/fs/btrfs/ctree.c
>> +++ b/fs/btrfs/ctree.c
>> @@ -2426,6 +2426,59 @@ done:
>>          return ret;
>>  }
>>
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                      int level, int *prev_cmp, int *slot)
>> +{
>> +        unsigned long eb_offset = 0;
>> +        unsigned long len_left = b->len;
>> +        char *kaddr = NULL;
>> +        unsigned long map_start = 0;
>> +        unsigned long map_len = 0;
>> +        unsigned long offset;
>> +        struct btrfs_disk_key *k = NULL;
>> +        struct btrfs_disk_key unaligned;
>> +
>> +        if (*prev_cmp != 0) {
>> +                *prev_cmp = bin_search(b, key, level, slot);
>> +                return *prev_cmp;
>> +        }
>> +
>> +        if (level == 0)
>> +                offset = offsetof(struct btrfs_leaf, items);
>> +        else
>> +                offset = offsetof(struct btrfs_node, ptrs);
>> +
>> +        /*
>> +         * Map the entire extent buffer, otherwise callers can't access
>> +         * all keys/items of the leaf/node. Specially needed for case
>> +         * where leaf/node size is greater than page cache size.
>> +         */
>> +        while (len_left > 0) {
>> +                unsigned long len = min(PAGE_CACHE_SIZE, len_left);
>> +                int err;
>> +
>> +                err = map_private_extent_buffer(b, eb_offset, len, &kaddr,
>> +                                                &map_start, &map_len);
>> +                len_left -= len;
>> +                eb_offset += len;
>> +                if (k)
>> +                        continue;
>> +                if (!err) {
>> +                        k = (struct btrfs_disk_key *)(kaddr + offset -
>> +                                                      map_start);
>> +                } else {
>> +                        read_extent_buffer(b, &unaligned,
>> +                                           offset, sizeof(unaligned));
>> +                        k = &unaligned;
>> +                }
>> +        }
>> +
>
> This confuses me, if we're at slot 0 we should be at the front of the first
> page, no matter what, so why not just read the first key and carry on?

Correct. Mistake of mine, corrected in the second patch version.
I was having NULL pointer dereferences in read_extent_buffer when the
leaf/node sizes were bigger than page cache size. Turned out to be a
mistake from me, and no need to do the whole mapping on page size units.

>
>> +        BUG_ON(comp_keys(k, key) != 0);
>
> Please use the ASSERT() macro.  Thanks,

Ok, updating it. Thanks Josef.

>
> Josef

-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
Filipe David Borba Manana
2013-Aug-29 13:59 UTC
[PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
25.000 - 33.930: 211 ######
33.930 - 45.927: 277 ########
45.927 - 62.045: 1834 #####################################################
62.045 - 83.699: 1203 ###################################
83.699 - 112.789: 609 ##################
112.789 - 151.872: 450 #############
151.872 - 204.377: 246 #######
204.377 - 274.917: 124 ####
274.917 - 369.684: 48 #
369.684 - 497.000: 11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
10.000 - 20.339: 3160 #####################################################
20.339 - 40.397: 1131 ###################
40.397 - 79.308: 507 #########
79.308 - 154.794: 199 ###
154.794 - 301.232: 14 |
301.232 - 585.313: 1 |
585.313 - 8303.000: 1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---
V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.
 fs/btrfs/ctree.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..b69dd46 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,42 @@ done:
         return ret;
 }

+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+                      int level, int *prev_cmp, int *slot)
+{
+        char *kaddr = NULL;
+        unsigned long map_start = 0;
+        unsigned long map_len = 0;
+        unsigned long offset;
+        struct btrfs_disk_key *k = NULL;
+        struct btrfs_disk_key unaligned;
+        int err;
+
+        if (*prev_cmp != 0) {
+                *prev_cmp = bin_search(b, key, level, slot);
+                return *prev_cmp;
+        }
+
+        if (level == 0)
+                offset = offsetof(struct btrfs_leaf, items);
+        else
+                offset = offsetof(struct btrfs_node, ptrs);
+
+        err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
+                                        &kaddr, &map_start, &map_len);
+        if (!err) {
+                k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
+        } else {
+                read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
+                k = &unaligned;
+        }
+
+        ASSERT(comp_keys(k, key) == 0);
+        *slot = 0;
+
+        return 0;
+}
+
 /*
  * look for key in the tree.
 * path is filled in with nodes along the way
 * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2490,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
         int write_lock_level = 0;
         u8 lowest_level = 0;
         int min_write_lock_level;
+        int prev_cmp;

         lowest_level = p->lowest_level;
         WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2521,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
         min_write_lock_level = write_lock_level;

 again:
+        prev_cmp = -1;
         /*
          * we try very hard to do read locks on the root
          */
@@ -2584,7 +2622,7 @@ cow_done:
                 if (!cow)
                         btrfs_unlock_up_safe(p, level + 1);

-                ret = bin_search(b, key, level, &slot);
+                ret = key_search(b, key, level, &prev_cmp, &slot);

                 if (level != 0) {
                         int dec = 0;
@@ -2719,6 +2757,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
         int level;
         int lowest_unlock = 1;
         u8 lowest_level = 0;
+        int prev_cmp;

         lowest_level = p->lowest_level;
         WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2768,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
         }

 again:
+        prev_cmp = -1;
         b = get_old_root(root, time_seq);
         level = btrfs_header_level(b);
         p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2786,7 @@ again:
         */
         btrfs_unlock_up_safe(p, level + 1);

-        ret = bin_search(b, key, level, &slot);
+        ret = key_search(b, key, level, &prev_cmp, &slot);

         if (level != 0) {
                 int dec = 0;
-- 
1.7.9.5
Zach Brown
2013-Aug-29 18:08 UTC
Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
>
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
>
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000

> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000

Where'd the giant increase in the range max come from? Just jittery
measurement? Maybe get a lot more data points to smooth that out?

> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +                      int level, int *prev_cmp, int *slot)
> +{
> +        char *kaddr = NULL;
> +        unsigned long map_start = 0;
> +        unsigned long map_len = 0;
> +        unsigned long offset;
> +        struct btrfs_disk_key *k = NULL;
> +        struct btrfs_disk_key unaligned;
> +        int err;
> +
> +        if (*prev_cmp != 0) {
> +                *prev_cmp = bin_search(b, key, level, slot);
> +                return *prev_cmp;
> +        }

> +        *slot = 0;
> +
> +        return 0;

So this is the actual work done by the function.

> +
> +        if (level == 0)
> +                offset = offsetof(struct btrfs_leaf, items);
> +        else
> +                offset = offsetof(struct btrfs_node, ptrs);

(+10 fragility points for assuming that the key starts each struct
instead of using [0].key)

> +
> +        err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
> +                                        &kaddr, &map_start, &map_len);
> +        if (!err) {
> +                k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
> +        } else {
> +                read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
> +                k = &unaligned;
> +        }
> +
> +        ASSERT(comp_keys(k, key) == 0);

All of the rest of the function, including most of the local variables,
is overhead for that assertion. We don't actually care about the
relative sorted key position of the two keys so we don't need smart
field-aware comparisons. We can use a dumb memcmp.

We can replace all that stuff with two easy memcmp_extent_buffers()
which vanish if ASSERT is a nop.

        if (level)
                ASSERT(!memcmp_extent_buffer(b, key,
                        offsetof(struct btrfs_node, ptrs[0].key),
                        sizeof(*key)));
        else
                ASSERT(!memcmp_extent_buffer(b, key,
                        offsetof(struct btrfs_leaf, items[0].key),
                        sizeof(*key)));

Right?

- z
Josef Bacik
2013-Aug-29 18:35 UTC
Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
On Thu, Aug 29, 2013 at 11:08:16AM -0700, Zach Brown wrote:
> On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
> > When the binary search returns 0 (exact match), the target key
> > will necessarily be at slot 0 of all nodes below the current one,
> > so in this case the binary search is not needed because it will
> > always return 0, and we waste time doing it, holding node locks
> > for longer than necessary, etc.
> >
> > Below follow histograms with the times spent on the current approach of
> > doing a binary search when the previous binary search returned 0, and
> > times for the new approach, which directly picks the first item/child
> > node in the leaf/node.
> >
> > Count: 5013
> > Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> > Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
>
> > Count: 5013
> > Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> > Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
>
> Where'd the giant increase in the range max come from? Just jittery
> measurement? Maybe get a lot more data points to smooth that out?
>
> > +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> > +                      int level, int *prev_cmp, int *slot)
> > +{
> > +        char *kaddr = NULL;
> > +        unsigned long map_start = 0;
> > +        unsigned long map_len = 0;
> > +        unsigned long offset;
> > +        struct btrfs_disk_key *k = NULL;
> > +        struct btrfs_disk_key unaligned;
> > +        int err;
> > +
> > +        if (*prev_cmp != 0) {
> > +                *prev_cmp = bin_search(b, key, level, slot);
> > +                return *prev_cmp;
> > +        }
>
> > +        *slot = 0;
> > +
> > +        return 0;
>
> So this is the actual work done by the function.
>
> > +
> > +        if (level == 0)
> > +                offset = offsetof(struct btrfs_leaf, items);
> > +        else
> > +                offset = offsetof(struct btrfs_node, ptrs);
>
> (+10 fragility points for assuming that the key starts each struct
> instead of using [0].key)
>
> > +
> > +        err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
> > +                                        &kaddr, &map_start, &map_len);
> > +        if (!err) {
> > +                k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
> > +        } else {
> > +                read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
> > +                k = &unaligned;
> > +        }
> > +
> > +        ASSERT(comp_keys(k, key) == 0);
>
> All of the rest of the function, including most of the local variables,
> is overhead for that assertion. We don't actually care about the
> relative sorted key position of the two keys so we don't need smart
> field-aware comparisons. We can use a dumb memcmp.
>
> We can replace all that stuff with two easy memcmp_extent_buffers()
> which vanish if ASSERT is a nop.
>

Actually we can't since we have a cpu key and the keys in the eb are
disk keys. So maybe keep what we have here and wrap it completely in
CONFIG_BTRFS_ASSERT?

Josef
Filipe David Manana
2013-Aug-29 18:41 UTC
Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
On Thu, Aug 29, 2013 at 7:08 PM, Zach Brown <zab@redhat.com> wrote:
> On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
>
> Where'd the giant increase in the range max come from? Just jittery
> measurement? Maybe get a lot more data points to smooth that out?

Correct, just jittery.

>
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                      int level, int *prev_cmp, int *slot)
>> +{
>> +        char *kaddr = NULL;
>> +        unsigned long map_start = 0;
>> +        unsigned long map_len = 0;
>> +        unsigned long offset;
>> +        struct btrfs_disk_key *k = NULL;
>> +        struct btrfs_disk_key unaligned;
>> +        int err;
>> +
>> +        if (*prev_cmp != 0) {
>> +                *prev_cmp = bin_search(b, key, level, slot);
>> +                return *prev_cmp;
>> +        }
>
>> +        *slot = 0;
>> +
>> +        return 0;
>
> So this is the actual work done by the function.

Correct. That and the very first if statement in the function.

>
>> +
>> +        if (level == 0)
>> +                offset = offsetof(struct btrfs_leaf, items);
>> +        else
>> +                offset = offsetof(struct btrfs_node, ptrs);
>
> (+10 fragility points for assuming that the key starts each struct
> instead of using [0].key)

Ok.
I just copied that from ctree.c:bin_search(). I guess that gives
another +10 fragility points.
Thanks for pointing it out.

>
>> +
>> +        err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
>> +                                        &kaddr, &map_start, &map_len);
>> +        if (!err) {
>> +                k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
>> +        } else {
>> +                read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
>> +                k = &unaligned;
>> +        }
>> +
>> +        ASSERT(comp_keys(k, key) == 0);
>
> All of the rest of the function, including most of the local variables,
> is overhead for that assertion. We don't actually care about the
> relative sorted key position of the two keys so we don't need smart
> field-aware comparisons. We can use a dumb memcmp.
>
> We can replace all that stuff with two easy memcmp_extent_buffers()
> which vanish if ASSERT is a nop.
>
>         if (level)
>                 ASSERT(!memcmp_extent_buffer(b, key,
>                         offsetof(struct btrfs_node, ptrs[0].key),
>                         sizeof(*key)));
>         else
>                 ASSERT(!memcmp_extent_buffer(b, key,
>                         offsetof(struct btrfs_leaf, items[0].key),
>                         sizeof(*key)));
>
> Right?

No, and as Josef just pointed out, like that we compare a btrfs_key with
a btrfs_disk_key, which is wrong due to endianness differences. So I'll
go for Josef's suggestion in the following mail about wrapping stuff
with a CONFIG_BTRFS_ASSERT #ifdef macro.

>
> - z

-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
Zach Brown
2013-Aug-29 19:00 UTC
Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
> > We can replace all that stuff with two easy memcmp_extent_buffers()
> > which vanish if ASSERT is a nop.
>
> Actually we can't since we have a cpu key and the keys in the eb are disk keys.
> So maybe keep what we have here and wrap it completely in CONFIG_BTRFS_ASSERT?

I could have sworn that I checked that the input was a disk key.

In that case, then, I'd put all this off in a helper function that's
called in the asserts, that swabs to a disk key and then does the
memcmp. All this fiddly assert junk (which just compares keys!) doesn't
belong implemented by hand in this trivial helper.

- z
Zach Brown
2013-Aug-29 19:02 UTC
Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
> >> +       if (level == 0)
> >> +               offset = offsetof(struct btrfs_leaf, items);
> >> +       else
> >> +               offset = offsetof(struct btrfs_node, ptrs);
> >
> > (+10 fragility points for assuming that the key starts each struct
> > instead of using [0].key)
>
> Ok. I just copied that from ctree.c:bin_search(). I guess that gives
> another +10 fragility points.
> Thanks for pointing that out.

Yeah. Don't worry, you have quite a way to go before building up
personal fragility points that come anywhere near the wealth of
fragility points that btrfs has in the bank :).

- z
Filipe David Borba Manana
2013-Aug-29 19:21 UTC
[PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key will necessarily be at slot 0 of all nodes below the current one, so in this case the binary search is not needed because it will always return 0, and we waste time doing it, holding node locks for longer than necessary, etc. Below follow histograms with the times spent on the current approach of doing a binary search when the previous binary search returned 0, and times for the new approach, which directly picks the first item/child node in the leaf/node. Current approach: Count: 5013 Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972 Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000 25.000 - 33.930: 211 ###### 33.930 - 45.927: 277 ######## 45.927 - 62.045: 1834 ##################################################### 62.045 - 83.699: 1203 ################################### 83.699 - 112.789: 609 ################## 112.789 - 151.872: 450 ############# 151.872 - 204.377: 246 ####### 204.377 - 274.917: 124 #### 274.917 - 369.684: 48 # 369.684 - 497.000: 11 | Approach proposed by this patch: Count: 5013 Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147 Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000 10.000 - 20.339: 3160 ##################################################### 20.339 - 40.397: 1131 ################### 40.397 - 79.308: 507 ######### 79.308 - 154.794: 199 ### 154.794 - 301.232: 14 | 301.232 - 585.313: 1 | 585.313 - 8303.000: 1 | These samples were captured during a run of the btrfs tests 001, 002 and 004 in the xfstests, with a leaf/node size of 4Kb. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> --- V2: Simplified code, removed unnecessary code. V3: Replaced BUG_ON() with the new ASSERT() from Josef. V4: Addressed latest comments from Zach Brown and Josef Bacik. Surrounded all code that is used for the assertion with a #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed offset arguments to be more strictly correct. 
fs/btrfs/ctree.c | 43 +++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 41 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 5fa521b..a48cbb2 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -2426,6 +2426,41 @@ done: return ret; } +static void key_search_validate(struct extent_buffer *b, + struct btrfs_key *key, + int level) +{ +#ifdef CONFIG_BTRFS_ASSERT + struct btrfs_disk_key disk_key; + + btrfs_cpu_key_to_disk(&disk_key, key); + + if (level == 0) + ASSERT(!memcmp_extent_buffer(b, &disk_key, + offsetof(struct btrfs_leaf, items[0].key), + sizeof(disk_key))); + else + ASSERT(!memcmp_extent_buffer(b, &disk_key, + offsetof(struct btrfs_node, ptrs[0].key), + sizeof(disk_key))); +#endif +} + +static int key_search(struct extent_buffer *b, struct btrfs_key *key, + int level, int *prev_cmp, int *slot) +{ + + if (*prev_cmp != 0) { + *prev_cmp = bin_search(b, key, level, slot); + return *prev_cmp; + } + + key_search_validate(b, key, level); + *slot = 0; + + return 0; +} + /* * look for key in the tree. 
path is filled in with nodes along the way * if key is found, we return zero and you can find the item in the leaf @@ -2454,6 +2489,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root int write_lock_level = 0; u8 lowest_level = 0; int min_write_lock_level; + int prev_cmp; lowest_level = p->lowest_level; WARN_ON(lowest_level && ins_len > 0); @@ -2484,6 +2520,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root min_write_lock_level = write_lock_level; again: + prev_cmp = -1; /* * we try very hard to do read locks on the root */ @@ -2584,7 +2621,7 @@ cow_done: if (!cow) btrfs_unlock_up_safe(p, level + 1); - ret = bin_search(b, key, level, &slot); + ret = key_search(b, key, level, &prev_cmp, &slot); if (level != 0) { int dec = 0; @@ -2719,6 +2756,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key, int level; int lowest_unlock = 1; u8 lowest_level = 0; + int prev_cmp; lowest_level = p->lowest_level; WARN_ON(p->nodes[0] != NULL); @@ -2729,6 +2767,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key, } again: + prev_cmp = -1; b = get_old_root(root, time_seq); level = btrfs_header_level(b); p->locks[level] = BTRFS_READ_LOCK; @@ -2746,7 +2785,7 @@ again: */ btrfs_unlock_up_safe(p, level + 1); - ret = bin_search(b, key, level, &slot); + ret = key_search(b, key, level, &prev_cmp, &slot); if (level != 0) { int dec = 0; -- 1.7.9.5 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
David Sterba
2013-Aug-30 14:14 UTC
Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
On Thu, Aug 29, 2013 at 08:21:51PM +0100, Filipe David Borba Manana wrote:
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
> 25.000 - 33.930: 211 ######
> 33.930 - 45.927: 277 ########
> 45.927 - 62.045: 1834 #####################################################
> 62.045 - 83.699: 1203 ###################################
> 83.699 - 112.789: 609 ##################
> 112.789 - 151.872: 450 #############
> 151.872 - 204.377: 246 #######
> 204.377 - 274.917: 124 ####
> 274.917 - 369.684: 48 #
> 369.684 - 497.000: 11 |
>
> Approach proposed by this patch:
>
> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
> 10.000 - 20.339: 3160 #####################################################
> 20.339 - 40.397: 1131 ###################
> 40.397 - 79.308: 507 #########
> 79.308 - 154.794: 199 ###
> 154.794 - 301.232: 14 |
> 301.232 - 585.313: 1 |
> 585.313 - 8303.000: 1 |

The statistics do not change from patch to patch+1, but you're doing
changes that may affect performance; can you please update them as well?

thanks,
david
Filipe David Borba Manana
2013-Aug-30 14:46 UTC
[PATCH v5] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key will necessarily be at slot 0 of all nodes below the current one, so in this case the binary search is not needed because it will always return 0, and we waste time doing it, holding node locks for longer than necessary, etc. Below follow histograms with the times spent on the current approach of doing a binary search when the previous binary search returned 0, and times for the new approach, which directly picks the first item/child node in the leaf/node. Current approach: Count: 6682 Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429 Percentiles: 90th: 124.000; 95th: 145.000; 99th: 206.000 35.000 - 61.080: 1235 ################ 61.080 - 106.053: 4207 ##################################################### 106.053 - 183.606: 1122 ############## 183.606 - 317.341: 111 # 317.341 - 547.959: 6 | 547.959 - 8370.000: 1 | Approach proposed by this patch: Count: 6682 Range: 6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev: 7.160 Percentiles: 90th: 23.000; 95th: 27.000; 99th: 40.000 6.000 - 8.418: 58 # 8.418 - 11.670: 1149 ######################### 11.670 - 16.046: 2418 ##################################################### 16.046 - 21.934: 2098 ############################################## 21.934 - 29.854: 744 ################ 29.854 - 40.511: 154 ### 40.511 - 54.848: 41 # 54.848 - 74.136: 5 | 74.136 - 100.087: 9 | 100.087 - 135.000: 6 | These samples were captured during a run of the btrfs tests 001, 002 and 004 in the xfstests, with a leaf/node size of 4Kb. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> --- V2: Simplified code, removed unnecessary code. V3: Replaced BUG_ON() with the new ASSERT() from Josef. V4: Addressed latest comments from Zach Brown and Josef Bacik. Surrounded all code that is used for the assertion with a #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed offset arguments to be more strictly correct. 
V5: Updated histograms to reflect latest version of the code. fs/btrfs/ctree.c | 42 ++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 40 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 5fa521b..6434672 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -2426,6 +2426,40 @@ done: return ret; } +static void key_search_validate(struct extent_buffer *b, + struct btrfs_key *key, + int level) +{ +#ifdef CONFIG_BTRFS_ASSERT + struct btrfs_disk_key disk_key; + + btrfs_cpu_key_to_disk(&disk_key, key); + + if (level == 0) + ASSERT(!memcmp_extent_buffer(b, &disk_key, + offsetof(struct btrfs_leaf, items[0].key), + sizeof(disk_key))); + else + ASSERT(!memcmp_extent_buffer(b, &disk_key, + offsetof(struct btrfs_node, ptrs[0].key), + sizeof(disk_key))); +#endif +} + +static int key_search(struct extent_buffer *b, struct btrfs_key *key, + int level, int *prev_cmp, int *slot) +{ + if (*prev_cmp != 0) { + *prev_cmp = bin_search(b, key, level, slot); + return *prev_cmp; + } + + key_search_validate(b, key, level); + *slot = 0; + + return 0; +} + /* * look for key in the tree. 
path is filled in with nodes along the way * if key is found, we return zero and you can find the item in the leaf @@ -2454,6 +2488,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root int write_lock_level = 0; u8 lowest_level = 0; int min_write_lock_level; + int prev_cmp; lowest_level = p->lowest_level; WARN_ON(lowest_level && ins_len > 0); @@ -2484,6 +2519,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root min_write_lock_level = write_lock_level; again: + prev_cmp = -1; /* * we try very hard to do read locks on the root */ @@ -2584,7 +2620,7 @@ cow_done: if (!cow) btrfs_unlock_up_safe(p, level + 1); - ret = bin_search(b, key, level, &slot); + ret = key_search(b, key, level, &prev_cmp, &slot); if (level != 0) { int dec = 0; @@ -2719,6 +2755,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key, int level; int lowest_unlock = 1; u8 lowest_level = 0; + int prev_cmp; lowest_level = p->lowest_level; WARN_ON(p->nodes[0] != NULL); @@ -2729,6 +2766,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key, } again: + prev_cmp = -1; b = get_old_root(root, time_seq); level = btrfs_header_level(b); p->locks[level] = BTRFS_READ_LOCK; @@ -2746,7 +2784,7 @@ again: */ btrfs_unlock_up_safe(p, level + 1); - ret = bin_search(b, key, level, &slot); + ret = key_search(b, key, level, &prev_cmp, &slot); if (level != 0) { int dec = 0; -- 1.7.9.5 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Filipe David Manana
2013-Aug-30 14:47 UTC
Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
On Fri, Aug 30, 2013 at 3:14 PM, David Sterba <dsterba@suse.cz> wrote:
> On Thu, Aug 29, 2013 at 08:21:51PM +0100, Filipe David Borba Manana wrote:
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles: 90th: 141.000; 95th: 182.000; 99th: 287.000
>> 25.000 - 33.930: 211 ######
>> 33.930 - 45.927: 277 ########
>> 45.927 - 62.045: 1834 #####################################################
>> 62.045 - 83.699: 1203 ###################################
>> 83.699 - 112.789: 609 ##################
>> 112.789 - 151.872: 450 #############
>> 151.872 - 204.377: 246 #######
>> 204.377 - 274.917: 124 ####
>> 274.917 - 369.684: 48 #
>> 369.684 - 497.000: 11 |
>>
>> Approach proposed by this patch:
>>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles: 90th: 49.000; 95th: 74.000; 99th: 115.000
>> 10.000 - 20.339: 3160 #####################################################
>> 20.339 - 40.397: 1131 ###################
>> 40.397 - 79.308: 507 #########
>> 79.308 - 154.794: 199 ###
>> 154.794 - 301.232: 14 |
>> 301.232 - 585.313: 1 |
>> 585.313 - 8303.000: 1 |
>
> The statistics do not change from patch to patch+1 but you're doing
> changes that may affect performance, can you please update them as well?

Sure.
They're actually better now :)
Patch following with updated histograms.

> thanks,
> david

--
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
David Sterba
2013-Aug-30 14:59 UTC
Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
On Fri, Aug 30, 2013 at 03:47:21PM +0100, Filipe David Manana wrote:
> Sure.
> They're actually better now :)

Thanks. The numbers changed in both samples, but the relative difference
is still a 2x improvement in this particular test.

david
Filipe David Manana
2013-Aug-30 15:10 UTC
Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
On Fri, Aug 30, 2013 at 3:59 PM, David Sterba <dsterba@suse.cz> wrote:
> On Fri, Aug 30, 2013 at 03:47:21PM +0100, Filipe David Manana wrote:
>> Sure.
>> They're actually better now :)
>
> Thanks. The numbers changed in both samples, but the relative difference
> is still 2x improvement in this particular test.

I tend to favor the percentiles above everything else, and for this last
comparison they're all about 5x better.

These times are for a single node/leaf search. The higher the level
(where the root is highest) at which an exact match first happens, the
better it is for the overall tree search operation, as this optimal code
path gets executed more times.

> david

--
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
Miao Xie
2013-Aug-31 11:08 UTC
Re: [PATCH v5] Btrfs: optimize key searches in btrfs_search_slot
On Fri, 30 Aug 2013 15:46:43 +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
>
> [snip histograms and changelog]
>
> +static void key_search_validate(struct extent_buffer *b,
> +                                struct btrfs_key *key,
> +                                int level)
> +{
> +#ifdef CONFIG_BTRFS_ASSERT
> +       struct btrfs_disk_key disk_key;
> +
> +       btrfs_cpu_key_to_disk(&disk_key, key);
> +
> +       if (level == 0)
> +               ASSERT(!memcmp_extent_buffer(b, &disk_key,
> +                           offsetof(struct btrfs_leaf, items[0].key),
> +                           sizeof(disk_key)));
> +       else
> +               ASSERT(!memcmp_extent_buffer(b, &disk_key,
> +                           offsetof(struct btrfs_node, ptrs[0].key),
> +                           sizeof(disk_key)));
> +#endif
> +}

I think it is better to move #ifdef out of key_search_validate(), and
make the function return the check result, then

> +
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +                     int level, int *prev_cmp, int *slot)
> +{
> +       if (*prev_cmp != 0) {
> +               *prev_cmp = bin_search(b, key, level, slot);
> +               return *prev_cmp;
> +       }
> +
> +       key_search_validate(b, key, level);

        ASSERT(key_search_validate(b, key, level));

it can make the compiler happy when CONFIG_BTRFS_ASSERT is not set.

Thanks
Miao

> +       *slot = 0;
> +
> +       return 0;
> +}
> [snip rest of the patch]
Filipe David Borba Manana
2013-Aug-31 12:54 UTC
[PATCH v6] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key will necessarily be at slot 0 of all nodes below the current one, so in this case the binary search is not needed because it will always return 0, and we waste time doing it, holding node locks for longer than necessary, etc. Below follow histograms with the times spent on the current approach of doing a binary search when the previous binary search returned 0, and times for the new approach, which directly picks the first item/child node in the leaf/node. Current approach: Count: 6682 Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429 Percentiles: 90th: 124.000; 95th: 145.000; 99th: 206.000 35.000 - 61.080: 1235 ################ 61.080 - 106.053: 4207 ##################################################### 106.053 - 183.606: 1122 ############## 183.606 - 317.341: 111 # 317.341 - 547.959: 6 | 547.959 - 8370.000: 1 | Approach proposed by this patch: Count: 6682 Range: 6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev: 7.160 Percentiles: 90th: 23.000; 95th: 27.000; 99th: 40.000 6.000 - 8.418: 58 # 8.418 - 11.670: 1149 ######################### 11.670 - 16.046: 2418 ##################################################### 16.046 - 21.934: 2098 ############################################## 21.934 - 29.854: 744 ################ 29.854 - 40.511: 154 ### 40.511 - 54.848: 41 # 54.848 - 74.136: 5 | 74.136 - 100.087: 9 | 100.087 - 135.000: 6 | These samples were captured during a run of the btrfs tests 001, 002 and 004 in the xfstests, with a leaf/node size of 4Kb. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Signed-off-by: Josef Bacik <jbacik@fusionio.com> --- V2: Simplified code, removed unnecessary code. V3: Replaced BUG_ON() with the new ASSERT() from Josef. V4: Addressed latest comments from Zach Brown and Josef Bacik. Surrounded all code that is used for the assertion with a #ifdef CONFIG_BTRFS_ASSERT ... #endif block. 
Also changed offset arguments to be more strictly correct. V5: Updated histograms to reflect latest version of the code. V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT ... #endif logic, as suggested by Miao Xie. fs/btrfs/ctree.c | 39 +++++++++++++++++++++++++++++++++++++-- 1 file changed, 37 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 5fa521b..5f38157 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -2426,6 +2426,37 @@ done: return ret; } +static int key_search_validate(struct extent_buffer *b, + struct btrfs_key *key, + int level) +{ + struct btrfs_disk_key disk_key; + unsigned long offset; + + btrfs_cpu_key_to_disk(&disk_key, key); + + if (level == 0) + offset = offsetof(struct btrfs_leaf, items[0].key); + else + offset = offsetof(struct btrfs_node, ptrs[0].key); + + return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key)); +} + +static int key_search(struct extent_buffer *b, struct btrfs_key *key, + int level, int *prev_cmp, int *slot) +{ + if (*prev_cmp != 0) { + *prev_cmp = bin_search(b, key, level, slot); + return *prev_cmp; + } + + ASSERT(key_search_validate(b, key, level)); + *slot = 0; + + return 0; +} + /* * look for key in the tree. 
path is filled in with nodes along the way * if key is found, we return zero and you can find the item in the leaf @@ -2454,6 +2485,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root int write_lock_level = 0; u8 lowest_level = 0; int min_write_lock_level; + int prev_cmp; lowest_level = p->lowest_level; WARN_ON(lowest_level && ins_len > 0); @@ -2484,6 +2516,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root min_write_lock_level = write_lock_level; again: + prev_cmp = -1; /* * we try very hard to do read locks on the root */ @@ -2584,7 +2617,7 @@ cow_done: if (!cow) btrfs_unlock_up_safe(p, level + 1); - ret = bin_search(b, key, level, &slot); + ret = key_search(b, key, level, &prev_cmp, &slot); if (level != 0) { int dec = 0; @@ -2719,6 +2752,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key, int level; int lowest_unlock = 1; u8 lowest_level = 0; + int prev_cmp; lowest_level = p->lowest_level; WARN_ON(p->nodes[0] != NULL); @@ -2729,6 +2763,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key, } again: + prev_cmp = -1; b = get_old_root(root, time_seq); level = btrfs_header_level(b); p->locks[level] = BTRFS_READ_LOCK; @@ -2746,7 +2781,7 @@ again: */ btrfs_unlock_up_safe(p, level + 1); - ret = bin_search(b, key, level, &slot); + ret = key_search(b, key, level, &prev_cmp, &slot); if (level != 0) { int dec = 0; -- 1.7.9.5 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Miao Xie
2013-Sep-01 07:21 UTC
Re: [PATCH v6] Btrfs: optimize key searches in btrfs_search_slot
On Sat, 31 Aug 2013 13:54:56 +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
>
> [snip histograms and changelog]
>
> +static int key_search_validate(struct extent_buffer *b,
> +                              struct btrfs_key *key,
> +                              int level)
> +{
> +       struct btrfs_disk_key disk_key;
> +       unsigned long offset;
> +
> +       btrfs_cpu_key_to_disk(&disk_key, key);
> +
> +       if (level == 0)
> +               offset = offsetof(struct btrfs_leaf, items[0].key);
> +       else
> +               offset = offsetof(struct btrfs_node, ptrs[0].key);
> +
> +       return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
> +}

Maybe I didn't explain clearly in the previous mail: what I suggested
was to move "#ifdef CONFIG_BTRFS_ASSERT" out of the function, not to
remove it. The final code is:

#ifdef CONFIG_BTRFS_ASSERT
static int key_search_validate()
{
}
#endif

static int key_search()
{
	...
	ASSERT(key_search_validate(b, key, level));
	...
}

If there is no "#ifdef CONFIG_BTRFS_ASSERT" wrapper around
key_search_validate(), the compiler will output an unused function
warning when CONFIG_BTRFS_ASSERT is not set.

Thanks
Miao

> [snip rest of the patch]
Filipe David Manana
2013-Sep-01 10:32 UTC
Re: [PATCH v6] Btrfs: optimize key searches in btrfs_search_slot
On Sun, Sep 1, 2013 at 8:21 AM, Miao Xie <miaox@cn.fujitsu.com> wrote:
> On Sat, 31 Aug 2013 13:54:56 +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Current approach:
>>
>> Count: 6682
>> Range:  35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
>> Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
>>   35.000 -   61.080:  1235 ################
>>   61.080 -  106.053:  4207 #####################################################
>>  106.053 -  183.606:  1122 ##############
>>  183.606 -  317.341:   111 #
>>  317.341 -  547.959:     6 |
>>  547.959 - 8370.000:     1 |
>>
>> Approach proposed by this patch:
>>
>> Count: 6682
>> Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev: 7.160
>> Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
>>    6.000 -   8.418:    58 #
>>    8.418 -  11.670:  1149 #########################
>>   11.670 -  16.046:  2418 #####################################################
>>   16.046 -  21.934:  2098 ##############################################
>>   21.934 -  29.854:   744 ################
>>   29.854 -  40.511:   154 ###
>>   40.511 -  54.848:    41 #
>>   54.848 -  74.136:     5 |
>>   74.136 - 100.087:     9 |
>>  100.087 - 135.000:     6 |
>>
>> These samples were captured during a run of the btrfs tests 001, 002 and
>> 004 in the xfstests, with a leaf/node size of 4Kb.
>>
>> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
>> Signed-off-by: Josef Bacik <jbacik@fusionio.com>
>> ---
>>
>> V2: Simplified code, removed unnecessary code.
>> V3: Replaced BUG_ON() with the new ASSERT() from Josef.
>> V4: Addressed latest comments from Zach Brown and Josef Bacik.
>>     Surrounded all code that is used for the assertion with a
>>     #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
>>     offset arguments to be more strictly correct.
>> V5: Updated histograms to reflect latest version of the code.
>> V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT
>>     ... #endif logic, as suggested by Miao Xie.
>>
>>  fs/btrfs/ctree.c |   39 +++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 37 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
>> index 5fa521b..5f38157 100644
>> --- a/fs/btrfs/ctree.c
>> +++ b/fs/btrfs/ctree.c
>> @@ -2426,6 +2426,37 @@ done:
>>  	return ret;
>>  }
>>
>> +static int key_search_validate(struct extent_buffer *b,
>> +			       struct btrfs_key *key,
>> +			       int level)
>> +{
>> +	struct btrfs_disk_key disk_key;
>> +	unsigned long offset;
>> +
>> +	btrfs_cpu_key_to_disk(&disk_key, key);
>> +
>> +	if (level == 0)
>> +		offset = offsetof(struct btrfs_leaf, items[0].key);
>> +	else
>> +		offset = offsetof(struct btrfs_node, ptrs[0].key);
>> +
>> +	return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
>> +}
>
> Maybe I didn't explain clearly in the previous mail, what I suggested was to
> move "#ifdef CONFIG_BTRFS_ASSERT" out of the function, not to remove it. The
> final code is:
>
> #ifdef CONFIG_BTRFS_ASSERT
> static int key_search_validate()
> {
> }
> #endif
>
> static int key_search()
> {
> 	...
> 	ASSERT(key_search_validate(b, key, level));
> 	...
> }
>
> If there is no "#ifdef CONFIG_BTRFS_ASSERT" wrapper around key_search_validate(),
> the compiler will output the unused function warning if CONFIG_BTRFS_ASSERT is
> not set.

Ok.
I misunderstood what you meant before.

If the goal is not to remove the #ifdef #endif, then honestly I'm not seeing
what value the suggestion brings in compared to patch v5, as it seems purely
a style preference (and highly subjective whether it's better or not).
Nevertheless I'm fine with it and hopefully everyone else will be too.

thanks

>
> Thanks
> Miao
>
>> +
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +		      int level, int *prev_cmp, int *slot)
>> +{
>> +	if (*prev_cmp != 0) {
>> +		*prev_cmp = bin_search(b, key, level, slot);
>> +		return *prev_cmp;
>> +	}
>> +
>> +	ASSERT(key_search_validate(b, key, level));
>> +	*slot = 0;
>> +
>> +	return 0;
>> +}
>> +
>>  /*
>>   * look for key in the tree. path is filled in with nodes along the way
>>   * if key is found, we return zero and you can find the item in the leaf
>> @@ -2454,6 +2485,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>>  	int write_lock_level = 0;
>>  	u8 lowest_level = 0;
>>  	int min_write_lock_level;
>> +	int prev_cmp;
>>
>>  	lowest_level = p->lowest_level;
>>  	WARN_ON(lowest_level && ins_len > 0);
>> @@ -2484,6 +2516,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>>  	min_write_lock_level = write_lock_level;
>>
>>  again:
>> +	prev_cmp = -1;
>>  	/*
>>  	 * we try very hard to do read locks on the root
>>  	 */
>> @@ -2584,7 +2617,7 @@ cow_done:
>>  		if (!cow)
>>  			btrfs_unlock_up_safe(p, level + 1);
>>
>> -		ret = bin_search(b, key, level, &slot);
>> +		ret = key_search(b, key, level, &prev_cmp, &slot);
>>
>>  		if (level != 0) {
>>  			int dec = 0;
>> @@ -2719,6 +2752,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>>  	int level;
>>  	int lowest_unlock = 1;
>>  	u8 lowest_level = 0;
>> +	int prev_cmp;
>>
>>  	lowest_level = p->lowest_level;
>>  	WARN_ON(p->nodes[0] != NULL);
>> @@ -2729,6 +2763,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>>  	}
>>
>>  again:
>> +	prev_cmp = -1;
>>  	b = get_old_root(root, time_seq);
>>  	level = btrfs_header_level(b);
>>  	p->locks[level] = BTRFS_READ_LOCK;
>> @@ -2746,7 +2781,7 @@ again:
>>  	 */
>>  	btrfs_unlock_up_safe(p, level + 1);
>>
>> -	ret = bin_search(b, key, level, &slot);
>> +	ret = key_search(b, key, level, &prev_cmp, &slot);
>>
>>  	if (level != 0) {
>>  		int dec = 0;
>>

--
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
Filipe David Borba Manana
2013-Sep-01 10:39 UTC
[PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 6682
Range:  35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
  35.000 -   61.080:  1235 ################
  61.080 -  106.053:  4207 #####################################################
 106.053 -  183.606:  1122 ##############
 183.606 -  317.341:   111 #
 317.341 -  547.959:     6 |
 547.959 - 8370.000:     1 |

Approach proposed by this patch:

Count: 6682
Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev: 7.160
Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
   6.000 -   8.418:    58 #
   8.418 -  11.670:  1149 #########################
  11.670 -  16.046:  2418 #####################################################
  16.046 -  21.934:  2098 ##############################################
  21.934 -  29.854:   744 ################
  29.854 -  40.511:   154 ###
  40.511 -  54.848:    41 #
  54.848 -  74.136:     5 |
  74.136 - 100.087:     9 |
 100.087 - 135.000:     6 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---

V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.
V4: Addressed latest comments from Zach Brown and Josef Bacik.
    Surrounded all code that is used for the assertion with a
    #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
    offset arguments to be more strictly correct.
V5: Updated histograms to reflect latest version of the code.
V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT
    ... #endif logic, as suggested by Miao Xie.
V7: Added back the #ifdef ... #endif logic, to avoid compiler warning
    about unused function when CONFIG_BTRFS_ASSERT is not enabled.

 fs/btrfs/ctree.c |   41 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..4d602f7 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,39 @@ done:
 	return ret;
 }

+#ifdef CONFIG_BTRFS_ASSERT
+static int key_search_validate(struct extent_buffer *b,
+			       struct btrfs_key *key,
+			       int level)
+{
+	struct btrfs_disk_key disk_key;
+	unsigned long offset;
+
+	btrfs_cpu_key_to_disk(&disk_key, key);
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items[0].key);
+	else
+		offset = offsetof(struct btrfs_node, ptrs[0].key);
+
+	return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
+}
+#endif
+
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	ASSERT(key_search_validate(b, key, level));
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree. path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2487,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;

 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2518,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;

 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2619,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);

-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);

 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2754,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;

 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2765,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}

 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2783,7 @@ again:
 	 */
 	btrfs_unlock_up_safe(p, level + 1);

-	ret = bin_search(b, key, level, &slot);
+	ret = key_search(b, key, level, &prev_cmp, &slot);

 	if (level != 0) {
 		int dec = 0;
--
1.7.9.5
David Sterba
2013-Sep-02 13:39 UTC
Re: [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
On Sun, Sep 01, 2013 at 11:39:28AM +0100, Filipe David Borba Manana wrote:
> +#ifdef CONFIG_BTRFS_ASSERT
> +static int key_search_validate(struct extent_buffer *b,
> +			       struct btrfs_key *key,
> +			       int level)
> +{
...
> +}
> +#endif
> +
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}
> +
> +	ASSERT(key_search_validate(b, key, level));

But what if I want to use key_search_validate out of the context of an
ASSERT?  I don't see a reason why the function needs to be under #ifdef
BTRFS_ASSERT / #endif at all.

> +	*slot = 0;
> +
> +	return 0;
> +}
Filipe David Manana
2013-Sep-02 14:40 UTC
Re: [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
On Mon, Sep 2, 2013 at 2:39 PM, David Sterba <dsterba@suse.cz> wrote:
> On Sun, Sep 01, 2013 at 11:39:28AM +0100, Filipe David Borba Manana wrote:
>> +#ifdef CONFIG_BTRFS_ASSERT
>> +static int key_search_validate(struct extent_buffer *b,
>> +			       struct btrfs_key *key,
>> +			       int level)
>> +{
> ...
>> +}
>> +#endif
>> +
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +		      int level, int *prev_cmp, int *slot)
>> +{
>> +	if (*prev_cmp != 0) {
>> +		*prev_cmp = bin_search(b, key, level, slot);
>> +		return *prev_cmp;
>> +	}
>> +
>> +	ASSERT(key_search_validate(b, key, level));
>
> But what if I want to use key_search_validate out of the context of an
> ASSERT?

Right. But right now nothing else uses it. Shall the need for it come,
it's trivial to address.

> I don't see a reason why the function needs to be under #ifdef
> BTRFS_ASSERT / #endif at all.

To avoid the compiler warning, as mentioned before.

Between patch versions v5 to v7, I don't have any strong preference.
All have correct, small and simple code.

>
>> +	*slot = 0;
>> +
>> +	return 0;
>> +}

--
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."
David Sterba
2013-Sep-02 14:52 UTC
Re: [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
On Mon, Sep 02, 2013 at 03:40:39PM +0100, Filipe David Manana wrote:
> Between patch versions v5 to v7, I don't have any strong preference.
> All have correct, small and simple code.

I'm ok with v7.