Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 00/18] Replace btrfs_workers with kernel workqueue based btrfs_workqueue
Add a new btrfs_workqueue_struct, which uses the kernel workqueue to
implement most of the original btrfs_workers functionality, and replace
btrfs_workers with it. With this patchset, the redundant workqueue code
is replaced by the kernel workqueue infrastructure, which reduces both
the code size and the effort needed to maintain it.

Results from sysbench show a minor improvement on the following server:
CPU: two-way Xeon X5660
RAM: 4G
HDD: SAS HDD, 150G total, 100G partition for the btrfs test

Test result on default mount options:
https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdENjajJTWFg5d1BWbExnYWFpMTJxeUE&usp=sharing
Test result on the "-o compress" mount option:
https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdHdTTEJ6OW96SXJFaDR5enB1SzMzc0E&usp=sharing

Changelog:
v1->v2:
- Fix some workqueue flags.
v2->v3:
- Add the thresholding mechanism to simulate the old behavior.
- Convert all the btrfs_workers to btrfs_workqueue_struct.
- Fix some potential deadlocks when executed in an IRQ handler.
v3->v4:
- Change the ordered workqueue implementation to fix the performance
  drop in 32K multi-thread random write.
- Change the high priority workqueue implementation to get an
  independent high priority workqueue without the starving problem.
- Simplify the btrfs_alloc_workqueue parameters.
- Coding style cleanup.
- Remove the redundant "_struct" suffix.

Qu Wenruo (18):
  btrfs: Cleanup the unused struct async_sched.
  btrfs: Add btrfs_workqueue_struct, implementing ordered execution
    based on kernel workqueue
  btrfs: Add high priority workqueue support for btrfs_workqueue_struct
  btrfs: Add threshold workqueue based on kernel workqueue
  btrfs: Replace fs_info->workers with btrfs_workqueue.
  btrfs: Replace fs_info->delalloc_workers with btrfs_workqueue
  btrfs: Replace fs_info->submit_workers with btrfs_workqueue.
  btrfs: Replace fs_info->flush_workers with btrfs_workqueue.
  btrfs: Replace fs_info->endio_* workqueue with btrfs_workqueue.
  btrfs: Replace fs_info->rmw_workers workqueue with btrfs_workqueue.
  btrfs: Replace fs_info->cache_workers workqueue with btrfs_workqueue.
  btrfs: Replace fs_info->readahead_workers workqueue with
    btrfs_workqueue.
  btrfs: Replace fs_info->fixup_workers workqueue with btrfs_workqueue.
  btrfs: Replace fs_info->delayed_workers workqueue with
    btrfs_workqueue.
  btrfs: Replace fs_info->qgroup_rescan_worker workqueue with
    btrfs_workqueue.
  btrfs: Replace fs_info->scrub_* workqueue with btrfs_workqueue.
  btrfs: Cleanup the old btrfs_worker.
  btrfs: Cleanup the "_struct" suffix in btrfs_workqueue

 fs/btrfs/async-thread.c  | 821 ++++++++++++-----------------------------------
 fs/btrfs/async-thread.h  | 117 ++-----
 fs/btrfs/ctree.h         |  39 ++-
 fs/btrfs/delayed-inode.c |   6 +-
 fs/btrfs/disk-io.c       | 212 +++++-------
 fs/btrfs/extent-tree.c   |   4 +-
 fs/btrfs/inode.c         |  38 +--
 fs/btrfs/ordered-data.c  |  11 +-
 fs/btrfs/qgroup.c        |  15 +-
 fs/btrfs/raid56.c        |  21 +-
 fs/btrfs/reada.c         |   4 +-
 fs/btrfs/scrub.c         |  70 ++--
 fs/btrfs/super.c         |  36 +--
 fs/btrfs/volumes.c       |  16 +-
 14 files changed, 430 insertions(+), 980 deletions(-)

--
1.8.5.1
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 01/18] btrfs: Cleanup the unused struct async_sched.
The struct async_sched is not used by any code and can be removed.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Reviewed-by: Josef Bacik <jbacik@fusionio.com>
---
Changelog:
v1->v2: None.
v2->v3: None.
v3->v4: None.
---
 fs/btrfs/volumes.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 92303f4..c63ed39 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5322,13 +5322,6 @@ static void btrfs_end_bio(struct bio *bio, int err)
 	}
 }
 
-struct async_sched {
-	struct bio *bio;
-	int rw;
-	struct btrfs_fs_info *info;
-	struct btrfs_work work;
-};
-
 /*
  * see run_scheduled_bios for a description of why bios are collected for
  * async submit.
--
1.8.5.1
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 02/18] btrfs: Add btrfs_workqueue_struct, implementing ordered execution based on kernel workqueue
Use the kernel workqueue to implement a new btrfs_workqueue_struct, which
has the same ordered execution feature as the old btrfs_worker: the func
is executed concurrently, while ordered_func/ordered_free are executed in
the sequence they were queued, after the corresponding func is done.

The new btrfs_workqueue works much like the original one: one workqueue
for normal work and a list for ordered work. When a work is queued, the
ordered work is added to the list and a helper function is queued into
the workqueue. The helper function executes a normal work and then checks
and executes as many ordered works as possible, in the sequence they were
queued.

This patch does not yet add the high priority workqueue or thresholding;
both features will be added in the following patches.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Cc: Josef Bacik <jbacik@fusionio.com>
---
Changelog:
v1->v2: None.
v2->v3:
- Fix the potential deadlock discovered by kernel lockdep.
- Reuse the async-thread.[ch] files.
- Make the ordered_func optional, which makes it adaptable to all
  btrfs_workers.
v3->v4:
- Use the old list method to implement the ordered workqueue.
  The previous 3-workqueue implementation needed extra time waiting for
  scheduling, which caused up to a 40% performance drop in compress
  tests. The old list method (after executing a normal work, check the
  ordered list and execute) does not need the extra scheduling.
- Simplify the btrfs_alloc_workqueue parameters. Now only one name is
  needed, and the ordered work mechanism is determined by
  work->ordered_func.
- Fix a memory leak in btrfs_destroy_workqueue.
---
 fs/btrfs/async-thread.c | 130 ++++++++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/async-thread.h |  27 ++++++++++
 2 files changed, 157 insertions(+)

diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
index c1e0b0c..f05d57e 100644
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (C) 2007 Oracle.  All rights reserved.
+ * Copyright (C) 2013 Fujitsu.  All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public
@@ -21,6 +22,7 @@
 #include <linux/list.h>
 #include <linux/spinlock.h>
 #include <linux/freezer.h>
+#include <linux/workqueue.h>
 #include "async-thread.h"
 
 #define WORK_QUEUED_BIT 0
@@ -726,3 +728,131 @@ void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work)
 	wake_up_process(worker->task);
 	spin_unlock_irqrestore(&worker->lock, flags);
 }
+
+struct btrfs_workqueue_struct {
+	struct workqueue_struct *normal_wq;
+	/* List head pointing to ordered work list */
+	struct list_head ordered_list;
+
+	/* Spinlock for ordered_list */
+	spinlock_t list_lock;
+};
+
+struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
+						     int flags,
+						     int max_active)
+{
+	struct btrfs_workqueue_struct *ret = kzalloc(sizeof(*ret), GFP_NOFS);
+
+	if (unlikely(!ret))
+		return NULL;
+
+	ret->normal_wq = alloc_workqueue("%s-%s", flags, max_active,
+					 "btrfs", name);
+	if (unlikely(!ret->normal_wq)) {
+		kfree(ret);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&ret->ordered_list);
+	spin_lock_init(&ret->list_lock);
+	return ret;
+}
+
+static void run_ordered_work(struct btrfs_workqueue_struct *wq)
+{
+	struct list_head *list = &wq->ordered_list;
+	struct btrfs_work_struct *work;
+	spinlock_t *lock = &wq->list_lock;
+	unsigned long flags;
+
+	while (1) {
+		spin_lock_irqsave(lock, flags);
+		if (list_empty(list))
+			break;
+		work = list_entry(list->next, struct btrfs_work_struct,
+				  ordered_list);
+		if (!test_bit(WORK_DONE_BIT, &work->flags))
+			break;
+
+		/*
+		 * we are going to call the ordered done function, but
+		 * we leave the work item on the list as a barrier so
+		 * that later work items that are done don't have their
+		 * functions called before this one returns
+		 */
+		if (test_and_set_bit(WORK_ORDER_DONE_BIT, &work->flags))
+			break;
+		spin_unlock_irqrestore(lock, flags);
+		work->ordered_func(work);
+
+		/* now take the lock again and drop our item from the list */
+		spin_lock_irqsave(lock, flags);
+		list_del(&work->ordered_list);
+		spin_unlock_irqrestore(lock, flags);
+
+		/*
+		 * we don't want to call the ordered free functions
+		 * with the lock held though
+		 */
+		work->ordered_free(work);
+	}
+	spin_unlock_irqrestore(lock, flags);
+}
+
+static void normal_work_helper(struct work_struct *arg)
+{
+	struct btrfs_work_struct *work;
+	int need_order = 0;
+
+	work = container_of(arg, struct btrfs_work_struct, normal_work);
+	/*
+	 * Some work may free the whole struct, we should check before
+	 * executing the func.
+	 */
+	if (work->ordered_func)
+		need_order = 1;
+	work->func(work);
+	if (need_order) {
+		set_bit(WORK_DONE_BIT, &work->flags);
+		run_ordered_work(work->wq);
+	}
+}
+
+void btrfs_init_work(struct btrfs_work_struct *work,
+		     void (*func)(struct btrfs_work_struct *),
+		     void (*ordered_func)(struct btrfs_work_struct *),
+		     void (*ordered_free)(struct btrfs_work_struct *))
+{
+	work->func = func;
+	work->ordered_func = ordered_func;
+	work->ordered_free = ordered_free;
+	INIT_WORK(&work->normal_work, normal_work_helper);
+	INIT_LIST_HEAD(&work->ordered_list);
+	work->flags = 0;
+}
+
+void btrfs_queue_work(struct btrfs_workqueue_struct *wq,
+		      struct btrfs_work_struct *work)
+{
+	unsigned long flags;
+
+	work->wq = wq;
+	if (work->ordered_func) {
+		spin_lock_irqsave(&wq->list_lock, flags);
+		list_add_tail(&work->ordered_list, &wq->ordered_list);
+		spin_unlock_irqrestore(&wq->list_lock, flags);
+	}
+	queue_work(wq->normal_wq, &work->normal_work);
+}
+
+void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq)
+{
+	destroy_workqueue(wq->normal_wq);
+	kfree(wq);
+}
+
+void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max)
+{
+	workqueue_set_max_active(wq->normal_wq, max);
+}
diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h
index 1f26792..1d16156 100644
--- a/fs/btrfs/async-thread.h
+++ b/fs/btrfs/async-thread.h
@@ -1,5 +1,6 @@
 /*
  * Copyright (C) 2007 Oracle.  All rights reserved.
+ * Copyright (C) 2013 Fujitsu.  All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public
@@ -118,4 +119,30 @@ void btrfs_init_workers(struct btrfs_workers *workers, char *name, int max,
 			struct btrfs_workers *async_starter);
 void btrfs_requeue_work(struct btrfs_work *work);
 void btrfs_set_work_high_prio(struct btrfs_work *work);
+
+struct btrfs_workqueue_struct;
+
+struct btrfs_work_struct {
+	void (*func)(struct btrfs_work_struct *arg);
+	void (*ordered_func)(struct btrfs_work_struct *arg);
+	void (*ordered_free)(struct btrfs_work_struct *arg);
+
+	/* Don't touch things below */
+	struct work_struct normal_work;
+	struct list_head ordered_list;
+	struct btrfs_workqueue_struct *wq;
+	unsigned long flags;
+};
+
+struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
+						     int flags,
+						     int max_active);
+void btrfs_init_work(struct btrfs_work_struct *work,
+		     void (*func)(struct btrfs_work_struct *),
+		     void (*ordered_func)(struct btrfs_work_struct *),
+		     void (*ordered_free)(struct btrfs_work_struct *));
+void btrfs_queue_work(struct btrfs_workqueue_struct *wq,
+		      struct btrfs_work_struct *work);
+void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq);
+void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max);
 #endif
--
1.8.5.1
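For reference, here is a minimal usage sketch of the API this patch
introduces (the my_* names and the job struct below are hypothetical, not
part of the patch; error handling is omitted):

	struct my_async_job {
		int payload;				/* job-specific data */
		struct btrfs_work_struct work;		/* embedded work item */
	};

	/* may run concurrently with other queued funcs */
	static void my_func(struct btrfs_work_struct *work)
	{
		struct my_async_job *job =
			container_of(work, struct my_async_job, work);
		/* ... heavy lifting on job->payload ... */
	}

	/* runs strictly in queueing order, after my_func has finished */
	static void my_ordered_func(struct btrfs_work_struct *work)
	{
		/* ... publish results in order ... */
	}

	/* called last; the work item may be freed here */
	static void my_ordered_free(struct btrfs_work_struct *work)
	{
		kfree(container_of(work, struct my_async_job, work));
	}

	wq = btrfs_alloc_workqueue("example", WQ_MEM_RECLAIM, 8);
	btrfs_init_work(&job->work, my_func, my_ordered_func, my_ordered_free);
	btrfs_queue_work(wq, &job->work);

Passing NULL for ordered_func/ordered_free yields a plain concurrent work
item with no ordering constraint, since btrfs_queue_work only adds the
work to the ordered list when work->ordered_func is set.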
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 03/18] btrfs: Add high priority workqueue support for btrfs_workqueue_struct
Add a high priority function to btrfs_workqueue. This is implemented by
embedding a second __btrfs_workqueue_struct into btrfs_workqueue_struct
and using helper functions to distinguish the normal priority wq from the
high priority wq, so the high priority wq is completely independent from
the normal workqueue.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v1->v2: None
v2->v3: None
v3->v4:
- Implement the high priority workqueue independently.
  Now the high priority wq is implemented as a normal btrfs_workqueue,
  with an independent ordering/thresholding mechanism. This fixes the
  problem that the high priority wq and the normal wq shared one
  ordered wq.
---
 fs/btrfs/async-thread.c | 89 +++++++++++++++++++++++++++++++++++++++++++------
 fs/btrfs/async-thread.h |  5 ++-
 2 files changed, 82 insertions(+), 12 deletions(-)

diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
index f05d57e..73b9f94 100644
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -729,7 +729,7 @@ void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work)
 	spin_unlock_irqrestore(&worker->lock, flags);
 }
 
-struct btrfs_workqueue_struct {
+struct __btrfs_workqueue_struct {
 	struct workqueue_struct *normal_wq;
 	/* List head pointing to ordered work list */
 	struct list_head ordered_list;
@@ -738,6 +738,38 @@ struct btrfs_workqueue_struct {
 	spinlock_t list_lock;
 };
 
+struct btrfs_workqueue_struct {
+	struct __btrfs_workqueue_struct *normal;
+	struct __btrfs_workqueue_struct *high;
+};
+
+static inline struct __btrfs_workqueue_struct
+*__btrfs_alloc_workqueue(char *name, int flags, int max_active)
+{
+	struct __btrfs_workqueue_struct *ret = kzalloc(sizeof(*ret), GFP_NOFS);
+
+	if (unlikely(!ret))
+		return NULL;
+
+	if (flags & WQ_HIGHPRI)
+		ret->normal_wq = alloc_workqueue("%s-%s-high", flags,
+						 max_active, "btrfs", name);
+	else
+		ret->normal_wq = alloc_workqueue("%s-%s", flags,
+						 max_active, "btrfs", name);
+	if (unlikely(!ret->normal_wq)) {
+		kfree(ret);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&ret->ordered_list);
+	spin_lock_init(&ret->list_lock);
+	return ret;
+}
+
+static inline void
+__btrfs_destroy_workqueue(struct __btrfs_workqueue_struct *wq);
+
 struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
 						     int flags,
 						     int max_active)
@@ -747,19 +779,25 @@ struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
 	if (unlikely(!ret))
 		return NULL;
 
-	ret->normal_wq = alloc_workqueue("%s-%s", flags, max_active,
-					 "btrfs", name);
-	if (unlikely(!ret->normal_wq)) {
+	ret->normal = __btrfs_alloc_workqueue(name, flags & ~WQ_HIGHPRI,
+					      max_active);
+	if (unlikely(!ret->normal)) {
 		kfree(ret);
 		return NULL;
 	}
 
-	INIT_LIST_HEAD(&ret->ordered_list);
-	spin_lock_init(&ret->list_lock);
+	if (flags & WQ_HIGHPRI) {
+		ret->high = __btrfs_alloc_workqueue(name, flags, max_active);
+		if (unlikely(!ret->high)) {
+			__btrfs_destroy_workqueue(ret->normal);
+			kfree(ret);
+			return NULL;
+		}
+	}
 	return ret;
 }
 
-static void run_ordered_work(struct btrfs_workqueue_struct *wq)
+static void run_ordered_work(struct __btrfs_workqueue_struct *wq)
 {
 	struct list_head *list = &wq->ordered_list;
 	struct btrfs_work_struct *work;
@@ -832,8 +870,8 @@ void btrfs_init_work(struct btrfs_work_struct *work,
 	work->flags = 0;
 }
 
-void btrfs_queue_work(struct btrfs_workqueue_struct *wq,
-		      struct btrfs_work_struct *work)
+static inline void __btrfs_queue_work(struct __btrfs_workqueue_struct *wq,
+				      struct btrfs_work_struct *work)
 {
 	unsigned long flags;
 
@@ -846,13 +884,42 @@ void btrfs_queue_work(struct btrfs_workqueue_struct *wq,
 	queue_work(wq->normal_wq, &work->normal_work);
 }
 
-void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq)
+void btrfs_queue_work(struct btrfs_workqueue_struct *wq,
+		      struct btrfs_work_struct *work)
+{
+	struct __btrfs_workqueue_struct *dest_wq;
+
+	if (test_bit(WORK_HIGH_PRIO_BIT, &work->flags) && wq->high)
+		dest_wq = wq->high;
+	else
+		dest_wq = wq->normal;
+	__btrfs_queue_work(dest_wq, work);
+}
+
+static inline void
+__btrfs_destroy_workqueue(struct __btrfs_workqueue_struct *wq)
 {
 	destroy_workqueue(wq->normal_wq);
 	kfree(wq);
 }
 
+void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq)
+{
+	if (!wq)
+		return;
+	if (wq->high)
+		__btrfs_destroy_workqueue(wq->high);
+	__btrfs_destroy_workqueue(wq->normal);
+}
+
 void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max)
 {
-	workqueue_set_max_active(wq->normal_wq, max);
+	workqueue_set_max_active(wq->normal->normal_wq, max);
+	if (wq->high)
+		workqueue_set_max_active(wq->high->normal_wq, max);
+}
+
+void btrfs_set_work_high_priority(struct btrfs_work_struct *work)
+{
+	set_bit(WORK_HIGH_PRIO_BIT, &work->flags);
 }
diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h
index 1d16156..62cd19d 100644
--- a/fs/btrfs/async-thread.h
+++ b/fs/btrfs/async-thread.h
@@ -121,6 +121,8 @@ void btrfs_requeue_work(struct btrfs_work *work);
 void btrfs_set_work_high_prio(struct btrfs_work *work);
 
 struct btrfs_workqueue_struct;
+/* Internal use only */
+struct __btrfs_workqueue_struct;
 
 struct btrfs_work_struct {
 	void (*func)(struct btrfs_work_struct *arg);
@@ -130,7 +132,7 @@ struct btrfs_work_struct {
 	/* Don't touch things below */
 	struct work_struct normal_work;
 	struct list_head ordered_list;
-	struct btrfs_workqueue_struct *wq;
+	struct __btrfs_workqueue_struct *wq;
 	unsigned long flags;
 };
 
@@ -145,4 +147,5 @@ void btrfs_queue_work(struct btrfs_workqueue_struct *wq,
 		      struct btrfs_work_struct *work);
 void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq);
 void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max);
+void btrfs_set_work_high_priority(struct btrfs_work_struct *work);
 #endif
--
1.8.5.1
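A short sketch of how a caller is expected to combine the two queues
after this patch (names other than the btrfs_* API are hypothetical):

	/* WQ_HIGHPRI at alloc time creates the independent "-high" queue */
	wq = btrfs_alloc_workqueue("example", WQ_MEM_RECLAIM | WQ_HIGHPRI, 8);

	btrfs_init_work(&job->work, my_func, NULL, NULL);
	/* without this call the work goes to wq->normal as before */
	btrfs_set_work_high_priority(&job->work);
	btrfs_queue_work(wq, &job->work);	/* dispatched to wq->high */

If the workqueue was allocated without WQ_HIGHPRI, wq->high stays NULL
and btrfs_queue_work falls back to the normal queue even for works that
were marked high priority.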
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 04/18] btrfs: Add threshold workqueue based on kernel workqueue
The original btrfs_workers has thresholding functions to dynamically
create or destroy kthreads. Though there is no such function in the
kernel workqueue, because its workers are not created manually, we can
still use workqueue_set_max_active to simulate the behavior, mainly to
achieve better HDD performance by setting a high threshold on
submit_workers. (Sadly, no resource can be saved.)

So in this patch, extra workqueue pending counters are introduced to
dynamically change the max active of each btrfs_workqueue_struct, hoping
to restore the behavior of the original thresholding function.

Also, workqueue_set_max_active uses a mutex to protect the
workqueue_struct and is not meant to be called too frequently, so a new
interval mechanism is applied: workqueue_set_max_active is only called
after a certain count of works has been queued, hoping to balance both
random and sequential performance on HDD.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v2->v3:
- Add thresholding mechanism to simulate the old thresholding mechanism.
- Will not enable thresholding when thresh is set to a small value.
v3->v4: None
---
 fs/btrfs/async-thread.c | 107 ++++++++++++++++++++++++++++++++++++++++++++----
 fs/btrfs/async-thread.h |   3 +-
 2 files changed, 101 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
index 73b9f94..a986be7 100644
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -30,6 +30,9 @@
 #define WORK_ORDER_DONE_BIT 2
 #define WORK_HIGH_PRIO_BIT 3
 
+#define NO_THRESHOLD (-1)
+#define DFT_THRESHOLD (32)
+
 /*
  * container for the kthread task pointer and the list of pending work
  * One of these is allocated per thread.
@@ -736,6 +739,14 @@ struct __btrfs_workqueue_struct {
 
 	/* Spinlock for ordered_list */
 	spinlock_t list_lock;
+
+	/* Thresholding related variants */
+	atomic_t pending;
+	int max_active;
+	int current_max;
+	int thresh;
+	unsigned int count;
+	spinlock_t thres_lock;
 };
 
 struct btrfs_workqueue_struct {
@@ -744,19 +755,34 @@ struct btrfs_workqueue_struct {
 };
 
 static inline struct __btrfs_workqueue_struct
-*__btrfs_alloc_workqueue(char *name, int flags, int max_active)
+*__btrfs_alloc_workqueue(char *name, int flags, int max_active, int thresh)
 {
 	struct __btrfs_workqueue_struct *ret = kzalloc(sizeof(*ret), GFP_NOFS);
 
 	if (unlikely(!ret))
 		return NULL;
 
+	ret->max_active = max_active;
+	atomic_set(&ret->pending, 0);
+	if (thresh == 0)
+		thresh = DFT_THRESHOLD;
+	/* For low threshold, disabling threshold is a better choice */
+	if (thresh < DFT_THRESHOLD) {
+		ret->current_max = max_active;
+		ret->thresh = NO_THRESHOLD;
+	} else {
+		ret->current_max = 1;
+		ret->thresh = thresh;
+	}
+
 	if (flags & WQ_HIGHPRI)
 		ret->normal_wq = alloc_workqueue("%s-%s-high", flags,
-						 max_active, "btrfs", name);
+						 ret->max_active,
+						 "btrfs", name);
 	else
 		ret->normal_wq = alloc_workqueue("%s-%s", flags,
-						 max_active, "btrfs", name);
+						 ret->max_active, "btrfs",
+						 name);
 	if (unlikely(!ret->normal_wq)) {
 		kfree(ret);
 		return NULL;
@@ -764,6 +790,7 @@ static inline struct __btrfs_workqueue_struct
 
 	INIT_LIST_HEAD(&ret->ordered_list);
 	spin_lock_init(&ret->list_lock);
+	spin_lock_init(&ret->thres_lock);
 	return ret;
 }
 
@@ -772,7 +799,8 @@ __btrfs_destroy_workqueue(struct __btrfs_workqueue_struct *wq);
 
 struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
 						     int flags,
-						     int max_active)
+						     int max_active,
+						     int thresh)
 {
 	struct btrfs_workqueue_struct *ret = kzalloc(sizeof(*ret), GFP_NOFS);
 
@@ -780,14 +808,15 @@ struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
 		return NULL;
 
 	ret->normal = __btrfs_alloc_workqueue(name, flags & ~WQ_HIGHPRI,
-					      max_active);
+					      max_active, thresh);
 	if (unlikely(!ret->normal)) {
 		kfree(ret);
 		return NULL;
 	}
 
 	if (flags & WQ_HIGHPRI) {
-		ret->high = __btrfs_alloc_workqueue(name, flags, max_active);
+		ret->high = __btrfs_alloc_workqueue(name, flags, max_active,
+						    thresh);
 		if (unlikely(!ret->high)) {
 			__btrfs_destroy_workqueue(ret->normal);
 			kfree(ret);
@@ -797,6 +826,66 @@ struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
 	return ret;
 }
 
+/*
+ * Hook for threshold which will be called in btrfs_queue_work.
+ * This hook WILL be called in IRQ handler context,
+ * so workqueue_set_max_active MUST NOT be called in this hook
+ */
+static inline void thresh_queue_hook(struct __btrfs_workqueue_struct *wq)
+{
+	if (wq->thresh == NO_THRESHOLD)
+		return;
+	atomic_inc(&wq->pending);
+}
+
+/*
+ * Hook for threshold which will be called before executing the work,
+ * This hook is called in kthread content.
+ * So workqueue_set_max_active is called here.
+ */
+static inline void thresh_exec_hook(struct __btrfs_workqueue_struct *wq)
+{
+	int new_max_active;
+	long pending;
+	int need_change = 0;
+
+	if (wq->thresh == NO_THRESHOLD)
+		return;
+
+	atomic_dec(&wq->pending);
+	spin_lock(&wq->thres_lock);
+	/*
+	 * Use wq->count to limit the calling frequency of
+	 * workqueue_set_max_active.
+	 */
+	wq->count++;
+	wq->count %= (wq->thresh / 4);
+	if (!wq->count)
+		goto out;
+	new_max_active = wq->current_max;
+
+	/*
+	 * pending may be changed later, but it's OK since we really
+	 * don't need it so accurate to calculate new_max_active.
+	 */
+	pending = atomic_read(&wq->pending);
+	if (pending > wq->thresh)
+		new_max_active++;
+	if (pending < wq->thresh / 2)
+		new_max_active--;
+	new_max_active = clamp_val(new_max_active, 1, wq->max_active);
+	if (new_max_active != wq->current_max) {
+		need_change = 1;
+		wq->current_max = new_max_active;
+	}
+out:
+	spin_unlock(&wq->thres_lock);
+
+	if (need_change) {
+		workqueue_set_max_active(wq->normal_wq, wq->current_max);
+	}
+}
+
 static void run_ordered_work(struct __btrfs_workqueue_struct *wq)
 {
 	struct list_head *list = &wq->ordered_list;
@@ -850,6 +939,7 @@ static void normal_work_helper(struct work_struct *arg)
 	 */
 	if (work->ordered_func)
 		need_order = 1;
+	thresh_exec_hook(work->wq);
 	work->func(work);
 	if (need_order) {
 		set_bit(WORK_DONE_BIT, &work->flags);
@@ -876,6 +966,7 @@ static inline void __btrfs_queue_work(struct __btrfs_workqueue_struct *wq,
 	unsigned long flags;
 
 	work->wq = wq;
+	thresh_queue_hook(wq);
 	if (work->ordered_func) {
 		spin_lock_irqsave(&wq->list_lock, flags);
 		list_add_tail(&work->ordered_list, &wq->ordered_list);
@@ -914,9 +1005,9 @@ void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq)
 
 void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max)
 {
-	workqueue_set_max_active(wq->normal->normal_wq, max);
+	wq->normal->max_active = max;
 	if (wq->high)
-		workqueue_set_max_active(wq->high->normal_wq, max);
+		wq->high->max_active = max;
 }
 
 void btrfs_set_work_high_priority(struct btrfs_work_struct *work)
diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h
index 62cd19d..eec4a0f 100644
--- a/fs/btrfs/async-thread.h
+++ b/fs/btrfs/async-thread.h
@@ -138,7 +138,8 @@ struct btrfs_work_struct {
 
 struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name,
 						     int flags,
-						     int max_active);
+						     int max_active,
+						     int thresh);
 void btrfs_init_work(struct btrfs_work_struct *work,
 		     void (*func)(struct btrfs_work_struct *),
 		     void (*ordered_func)(struct btrfs_work_struct *),
--
1.8.5.1
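As a worked example of the adjustment in thresh_exec_hook, assume a queue
created with max_active = 8 and thresh = 64: current_max starts at 1;
while more than 64 works are pending, each adjustment raises current_max
by one (1 -> 2 -> ... -> 8, clamped at max_active); once fewer than 32
(thresh / 2) works are pending, it steps back down, never dropping below
1. With this patch, btrfs_workqueue_set_max only updates max_active (the
clamp ceiling) and lets the hooks converge to it. A non-zero thresh below
DFT_THRESHOLD (32) disables thresholding entirely (NO_THRESHOLD), and
thresh == 0 selects the default of 32.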
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 05/18] btrfs: Replace fs_info->workers with btrfs_workqueue.
Use the newly created btrfs_workqueue_struct to replace the original
fs_info->workers.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v1->v2: None
v2->v3: None
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
---
 fs/btrfs/ctree.h   |  2 +-
 fs/btrfs/disk-io.c | 41 +++++++++++++++++++++--------------------
 fs/btrfs/super.c   |  2 +-
 3 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 54ab861..b3093c3 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1489,7 +1489,7 @@ struct btrfs_fs_info {
 	 * two
 	 */
 	struct btrfs_workers generic_worker;
-	struct btrfs_workers workers;
+	struct btrfs_workqueue_struct *workers;
 	struct btrfs_workers delalloc_workers;
 	struct btrfs_workers flush_workers;
 	struct btrfs_workers endio_workers;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 8072cfa..258c59a 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -107,7 +107,7 @@ struct async_submit_bio {
 	 * can't tell us where in the file the bio should go
 	 */
 	u64 bio_offset;
-	struct btrfs_work work;
+	struct btrfs_work_struct work;
 	int error;
 };
 
@@ -741,12 +741,12 @@ int btrfs_bio_wq_end_io(struct btrfs_fs_info *info, struct bio *bio,
 unsigned long btrfs_async_submit_limit(struct btrfs_fs_info *info)
 {
 	unsigned long limit = min_t(unsigned long,
-				    info->workers.max_workers,
+				    info->thread_pool_size,
 				    info->fs_devices->open_devices);
 	return 256 * limit;
 }
 
-static void run_one_async_start(struct btrfs_work *work)
+static void run_one_async_start(struct btrfs_work_struct *work)
 {
 	struct async_submit_bio *async;
 	int ret;
@@ -759,7 +759,7 @@ static void run_one_async_start(struct btrfs_work *work)
 		async->error = ret;
 }
 
-static void run_one_async_done(struct btrfs_work *work)
+static void run_one_async_done(struct btrfs_work_struct *work)
 {
 	struct btrfs_fs_info *fs_info;
 	struct async_submit_bio *async;
@@ -786,7 +786,7 @@ static void run_one_async_done(struct btrfs_work *work)
 			       async->bio_offset);
 }
 
-static void run_one_async_free(struct btrfs_work *work)
+static void run_one_async_free(struct btrfs_work_struct *work)
 {
 	struct async_submit_bio *async;
 
@@ -814,11 +814,9 @@ int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode,
 	async->submit_bio_start = submit_bio_start;
 	async->submit_bio_done = submit_bio_done;
 
-	async->work.func = run_one_async_start;
-	async->work.ordered_func = run_one_async_done;
-	async->work.ordered_free = run_one_async_free;
+	btrfs_init_work(&async->work, run_one_async_start,
+			run_one_async_done, run_one_async_free);
 
-	async->work.flags = 0;
 	async->bio_flags = bio_flags;
 	async->bio_offset = bio_offset;
 
@@ -827,9 +825,9 @@ int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode,
 	atomic_inc(&fs_info->nr_async_submits);
 
 	if (rw & REQ_SYNC)
-		btrfs_set_work_high_prio(&async->work);
+		btrfs_set_work_high_priority(&async->work);
 
-	btrfs_queue_worker(&fs_info->workers, &async->work);
+	btrfs_queue_work(fs_info->workers, &async->work);
 
 	while (atomic_read(&fs_info->async_submit_draining) &&
 	       atomic_read(&fs_info->nr_async_submits)) {
@@ -2011,7 +2009,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
 	btrfs_stop_workers(&fs_info->generic_worker);
 	btrfs_stop_workers(&fs_info->fixup_workers);
 	btrfs_stop_workers(&fs_info->delalloc_workers);
-	btrfs_stop_workers(&fs_info->workers);
+	btrfs_destroy_workqueue(fs_info->workers);
 	btrfs_stop_workers(&fs_info->endio_workers);
 	btrfs_stop_workers(&fs_info->endio_meta_workers);
 	btrfs_stop_workers(&fs_info->endio_raid56_workers);
@@ -2109,6 +2107,8 @@ int open_ctree(struct super_block *sb,
 	int err = -EINVAL;
 	int num_backups_tried = 0;
 	int backup_index = 0;
+	int max_active;
+	int flags = WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_UNBOUND;
 	bool create_uuid_tree;
 	bool check_uuid_tree;
 
@@ -2468,12 +2468,13 @@ int open_ctree(struct super_block *sb,
 		goto fail_alloc;
 	}
 
+	max_active = fs_info->thread_pool_size;
 	btrfs_init_workers(&fs_info->generic_worker,
 			   "genwork", 1, NULL);
 
-	btrfs_init_workers(&fs_info->workers, "worker",
-			   fs_info->thread_pool_size,
-			   &fs_info->generic_worker);
+	fs_info->workers =
+		btrfs_alloc_workqueue("worker", flags | WQ_HIGHPRI,
+				      max_active, 16);
 
 	btrfs_init_workers(&fs_info->delalloc_workers, "delalloc",
 			   fs_info->thread_pool_size, NULL);
@@ -2494,9 +2495,6 @@ int open_ctree(struct super_block *sb,
 	 */
 	fs_info->submit_workers.idle_thresh = 64;
 
-	fs_info->workers.idle_thresh = 16;
-	fs_info->workers.ordered = 1;
-
 	fs_info->delalloc_workers.idle_thresh = 2;
 	fs_info->delalloc_workers.ordered = 1;
 
@@ -2548,8 +2546,7 @@ int open_ctree(struct super_block *sb,
 	 * btrfs_start_workers can really only fail because of ENOMEM so just
 	 * return -ENOMEM if any of these fail.
 	 */
-	ret = btrfs_start_workers(&fs_info->workers);
-	ret |= btrfs_start_workers(&fs_info->generic_worker);
+	ret = btrfs_start_workers(&fs_info->generic_worker);
 	ret |= btrfs_start_workers(&fs_info->submit_workers);
 	ret |= btrfs_start_workers(&fs_info->delalloc_workers);
 	ret |= btrfs_start_workers(&fs_info->fixup_workers);
@@ -2569,6 +2566,10 @@ int open_ctree(struct super_block *sb,
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
+	if (!(fs_info->workers)) {
+		err = -ENOMEM;
+		goto fail_sb_buffer;
+	}
 
 	fs_info->bdi.ra_pages *= btrfs_super_num_devices(disk_super);
 	fs_info->bdi.ra_pages = max(fs_info->bdi.ra_pages,
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 2d8ac1b..69eb58b 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -1245,7 +1245,7 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info,
 	       old_pool_size, new_pool_size);
 
 	btrfs_set_max_workers(&fs_info->generic_worker, new_pool_size);
-	btrfs_set_max_workers(&fs_info->workers, new_pool_size);
+	btrfs_workqueue_set_max(fs_info->workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->delalloc_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->submit_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->caching_workers, new_pool_size);
--
1.8.5.1
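The conversion pattern used here is repeated almost mechanically in the
following patches; schematically, old and new side by side (the fn, pool
and wq names are hypothetical):

	/* old btrfs_workers style */
	work.func = fn;
	work.ordered_func = ordered_fn;		/* optional */
	work.flags = 0;
	btrfs_queue_worker(&pool, &work);

	/* new btrfs_workqueue style */
	btrfs_init_work(&work, fn, ordered_fn, ordered_free);
	btrfs_queue_work(wq, &work);

The old pool's idle_thresh and ordered knobs are folded into the thresh
argument of btrfs_alloc_workqueue() and the presence of an ordered_func,
respectively.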
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 06/18] btrfs: Replace fs_info->delalloc_workers with btrfs_workqueue
Much like fs_info->workers, replace fs_info->delalloc_workers with the
same kind of btrfs_workqueue.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v1->v2: None
v2->v3: None
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
---
 fs/btrfs/ctree.h   |  2 +-
 fs/btrfs/disk-io.c | 12 ++++--------
 fs/btrfs/inode.c   | 18 ++++++++----------
 fs/btrfs/super.c   |  2 +-
 4 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index b3093c3..a86c9a1 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1490,7 +1490,7 @@ struct btrfs_fs_info {
 	 */
 	struct btrfs_workers generic_worker;
 	struct btrfs_workqueue_struct *workers;
-	struct btrfs_workers delalloc_workers;
+	struct btrfs_workqueue_struct *delalloc_workers;
 	struct btrfs_workers flush_workers;
 	struct btrfs_workers endio_workers;
 	struct btrfs_workers endio_meta_workers;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 258c59a..1098435 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2008,7 +2008,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
 {
 	btrfs_stop_workers(&fs_info->generic_worker);
 	btrfs_stop_workers(&fs_info->fixup_workers);
-	btrfs_stop_workers(&fs_info->delalloc_workers);
+	btrfs_destroy_workqueue(fs_info->delalloc_workers);
 	btrfs_destroy_workqueue(fs_info->workers);
 	btrfs_stop_workers(&fs_info->endio_workers);
 	btrfs_stop_workers(&fs_info->endio_meta_workers);
@@ -2476,8 +2476,8 @@ int open_ctree(struct super_block *sb,
 		btrfs_alloc_workqueue("worker", flags | WQ_HIGHPRI,
 				      max_active, 16);
 
-	btrfs_init_workers(&fs_info->delalloc_workers, "delalloc",
-			   fs_info->thread_pool_size, NULL);
+	fs_info->delalloc_workers =
+		btrfs_alloc_workqueue("delalloc", flags, max_active, 2);
 
 	btrfs_init_workers(&fs_info->flush_workers, "flush_delalloc",
 			   fs_info->thread_pool_size, NULL);
@@ -2495,9 +2495,6 @@ int open_ctree(struct super_block *sb,
 	 */
 	fs_info->submit_workers.idle_thresh = 64;
 
-	fs_info->delalloc_workers.idle_thresh = 2;
-	fs_info->delalloc_workers.ordered = 1;
-
 	btrfs_init_workers(&fs_info->fixup_workers, "fixup", 1,
 			   &fs_info->generic_worker);
 	btrfs_init_workers(&fs_info->endio_workers, "endio",
@@ -2548,7 +2545,6 @@ int open_ctree(struct super_block *sb,
 	 */
 	ret = btrfs_start_workers(&fs_info->generic_worker);
 	ret |= btrfs_start_workers(&fs_info->submit_workers);
-	ret |= btrfs_start_workers(&fs_info->delalloc_workers);
 	ret |= btrfs_start_workers(&fs_info->fixup_workers);
 	ret |= btrfs_start_workers(&fs_info->endio_workers);
 	ret |= btrfs_start_workers(&fs_info->endio_meta_workers);
@@ -2566,7 +2562,7 @@ int open_ctree(struct super_block *sb,
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
-	if (!(fs_info->workers)) {
+	if (!(fs_info->workers && fs_info->delalloc_workers)) {
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index f1a7744..220db71 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -305,7 +305,7 @@ struct async_cow {
 	u64 start;
 	u64 end;
 	struct list_head extents;
-	struct btrfs_work work;
+	struct btrfs_work_struct work;
 };
 
 static noinline int add_async_extent(struct async_cow *cow,
@@ -980,7 +980,7 @@ out_unlock:
 /*
  * work queue call back to started compression on a file and pages
  */
-static noinline void async_cow_start(struct btrfs_work *work)
+static noinline void async_cow_start(struct btrfs_work_struct *work)
 {
 	struct async_cow *async_cow;
 	int num_added = 0;
@@ -998,7 +998,7 @@ static noinline void async_cow_start(struct btrfs_work *work)
 /*
  * work queue call back to submit previously compressed pages
  */
-static noinline void async_cow_submit(struct btrfs_work *work)
+static noinline void async_cow_submit(struct btrfs_work_struct *work)
 {
 	struct async_cow *async_cow;
 	struct btrfs_root *root;
@@ -1019,7 +1019,7 @@ static noinline void async_cow_submit(struct btrfs_work *work)
 		submit_compressed_extents(async_cow->inode, async_cow);
 }
 
-static noinline void async_cow_free(struct btrfs_work *work)
+static noinline void async_cow_free(struct btrfs_work_struct *work)
 {
 	struct async_cow *async_cow;
 	async_cow = container_of(work, struct async_cow, work);
@@ -1056,17 +1056,15 @@ static int cow_file_range_async(struct inode *inode, struct page *locked_page,
 		async_cow->end = cur_end;
 		INIT_LIST_HEAD(&async_cow->extents);
 
-		async_cow->work.func = async_cow_start;
-		async_cow->work.ordered_func = async_cow_submit;
-		async_cow->work.ordered_free = async_cow_free;
-		async_cow->work.flags = 0;
+		btrfs_init_work(&async_cow->work, async_cow_start,
+				async_cow_submit, async_cow_free);
 
 		nr_pages = (cur_end - start + PAGE_CACHE_SIZE) >>
 			PAGE_CACHE_SHIFT;
 		atomic_add(nr_pages, &root->fs_info->async_delalloc_pages);
 
-		btrfs_queue_worker(&root->fs_info->delalloc_workers,
-				   &async_cow->work);
+		btrfs_queue_work(root->fs_info->delalloc_workers,
+				 &async_cow->work);
 
 		if (atomic_read(&root->fs_info->async_delalloc_pages) >
 		    limit) {
 			wait_event(root->fs_info->async_submit_wait,
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 69eb58b..875560e 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -1246,7 +1246,7 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info,
 
 	btrfs_set_max_workers(&fs_info->generic_worker, new_pool_size);
 	btrfs_workqueue_set_max(fs_info->workers, new_pool_size);
-	btrfs_set_max_workers(&fs_info->delalloc_workers, new_pool_size);
+	btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->submit_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->caching_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->fixup_workers, new_pool_size);
--
1.8.5.1
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 07/18] btrfs: Replace fs_info->submit_workers with btrfs_workqueue.
Much like fs_info->workers, replace fs_info->submit_workers with the
same kind of btrfs_workqueue.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v1->v2: None
v2->v3: None
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
---
 fs/btrfs/ctree.h   |  2 +-
 fs/btrfs/disk-io.c | 17 +++++++++--------
 fs/btrfs/super.c   |  2 +-
 fs/btrfs/volumes.c | 11 ++++++-----
 fs/btrfs/volumes.h |  2 +-
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index a86c9a1..4411a2b 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1499,7 +1499,7 @@ struct btrfs_fs_info {
 	struct btrfs_workers endio_meta_write_workers;
 	struct btrfs_workers endio_write_workers;
 	struct btrfs_workers endio_freespace_worker;
-	struct btrfs_workers submit_workers;
+	struct btrfs_workqueue_struct *submit_workers;
 	struct btrfs_workers caching_workers;
 	struct btrfs_workers readahead_workers;
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 1098435..cda9766 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2017,7 +2017,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
 	btrfs_stop_workers(&fs_info->endio_meta_write_workers);
 	btrfs_stop_workers(&fs_info->endio_write_workers);
 	btrfs_stop_workers(&fs_info->endio_freespace_worker);
-	btrfs_stop_workers(&fs_info->submit_workers);
+	btrfs_destroy_workqueue(fs_info->submit_workers);
 	btrfs_stop_workers(&fs_info->delayed_workers);
 	btrfs_stop_workers(&fs_info->caching_workers);
 	btrfs_stop_workers(&fs_info->readahead_workers);
@@ -2482,18 +2482,19 @@ int open_ctree(struct super_block *sb,
 	btrfs_init_workers(&fs_info->flush_workers, "flush_delalloc",
 			   fs_info->thread_pool_size, NULL);
 
-	btrfs_init_workers(&fs_info->submit_workers, "submit",
-			   min_t(u64, fs_devices->num_devices,
-			   fs_info->thread_pool_size), NULL);
 	btrfs_init_workers(&fs_info->caching_workers, "cache",
 			   fs_info->thread_pool_size, NULL);
 
-	/* a higher idle thresh on the submit workers makes it much more
+	/*
+	 * a higher idle thresh on the submit workers makes it much more
 	 * likely that bios will be send down in a sane order to the
 	 * devices
 	 */
-	fs_info->submit_workers.idle_thresh = 64;
+	fs_info->submit_workers =
+		btrfs_alloc_workqueue("submit", flags,
+				      min_t(u64, fs_devices->num_devices,
+					    max_active), 64);
 
 	btrfs_init_workers(&fs_info->fixup_workers, "fixup", 1,
 			   &fs_info->generic_worker);
@@ -2544,7 +2545,6 @@ int open_ctree(struct super_block *sb,
 	 * return -ENOMEM if any of these fail.
 	 */
 	ret = btrfs_start_workers(&fs_info->generic_worker);
-	ret |= btrfs_start_workers(&fs_info->submit_workers);
 	ret |= btrfs_start_workers(&fs_info->fixup_workers);
 	ret |= btrfs_start_workers(&fs_info->endio_workers);
 	ret |= btrfs_start_workers(&fs_info->endio_meta_workers);
@@ -2562,7 +2562,8 @@ int open_ctree(struct super_block *sb,
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
-	if (!(fs_info->workers && fs_info->delalloc_workers)) {
+	if (!(fs_info->workers && fs_info->delalloc_workers &&
+	      fs_info->submit_workers)) {
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 875560e..9f1d0a5 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -1247,7 +1247,7 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info,
 	btrfs_set_max_workers(&fs_info->generic_worker, new_pool_size);
 	btrfs_workqueue_set_max(fs_info->workers, new_pool_size);
 	btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size);
-	btrfs_set_max_workers(&fs_info->submit_workers, new_pool_size);
+	btrfs_workqueue_set_max(fs_info->submit_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->caching_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->fixup_workers, new_pool_size);
 	btrfs_set_max_workers(&fs_info->endio_workers, new_pool_size);
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index c63ed39..e07bd64 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -415,7 +415,8 @@ loop_lock:
 			device->running_pending = 1;
 
 			spin_unlock(&device->io_lock);
-			btrfs_requeue_work(&device->work);
+			btrfs_queue_work(fs_info->submit_workers,
+					 &device->work);
 			goto done;
 		}
 		/* unplug every 64 requests just for good measure */
@@ -439,7 +440,7 @@ done:
 	blk_finish_plug(&plug);
 }
 
-static void pending_bios_fn(struct btrfs_work *work)
+static void pending_bios_fn(struct btrfs_work_struct *work)
 {
 	struct btrfs_device *device;
 
@@ -5378,8 +5379,8 @@ static noinline void btrfs_schedule_bio(struct btrfs_root *root,
 	spin_unlock(&device->io_lock);
 
 	if (should_queue)
-		btrfs_queue_worker(&root->fs_info->submit_workers,
-				   &device->work);
+		btrfs_queue_work(root->fs_info->submit_workers,
+				 &device->work);
 }
 
 static int bio_size_ok(struct block_device *bdev, struct bio *bio,
@@ -5653,7 +5654,7 @@ struct btrfs_device *btrfs_alloc_device(struct btrfs_fs_info *fs_info,
 	else
 		generate_random_uuid(dev->uuid);
 
-	dev->work.func = pending_bios_fn;
+	btrfs_init_work(&dev->work, pending_bios_fn, NULL, NULL);
 
 	return dev;
 }
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 8b3cd14..60802e2 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -95,7 +95,7 @@ struct btrfs_device {
 	/* per-device scrub information */
 	struct scrub_ctx *scrub_device;
 
-	struct btrfs_work work;
+	struct btrfs_work_struct work;
 	struct rcu_head rcu;
 	struct work_struct rcu_work;
--
1.8.5.1
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 08/18] btrfs: Replace fs_info->flush_workers with btrfs_workqueue.
Replace fs_info->flush_workers with the newly created btrfs_workqueue.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v1->v2: None
v2->v3:
- Use the btrfs_workqueue_struct to replace flush_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
---
 fs/btrfs/ctree.h        |  4 ++--
 fs/btrfs/disk-io.c      | 10 ++++------
 fs/btrfs/inode.c        |  8 ++++----
 fs/btrfs/ordered-data.c | 13 +++++++------
 fs/btrfs/ordered-data.h |  2 +-
 5 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 4411a2b..097364d 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1491,7 +1491,7 @@ struct btrfs_fs_info {
 	struct btrfs_workers generic_worker;
 	struct btrfs_workqueue_struct *workers;
 	struct btrfs_workqueue_struct *delalloc_workers;
-	struct btrfs_workers flush_workers;
+	struct btrfs_workqueue_struct *flush_workers;
 	struct btrfs_workers endio_workers;
 	struct btrfs_workers endio_meta_workers;
 	struct btrfs_workers endio_raid56_workers;
@@ -3622,7 +3622,7 @@ struct btrfs_delalloc_work {
 	int delay_iput;
 	struct completion completion;
 	struct list_head list;
-	struct btrfs_work work;
+	struct btrfs_work_struct work;
 };
 
 struct btrfs_delalloc_work *btrfs_alloc_delalloc_work(struct inode *inode,
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index cda9766..139960f 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2021,7 +2021,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
 	btrfs_stop_workers(&fs_info->delayed_workers);
 	btrfs_stop_workers(&fs_info->caching_workers);
 	btrfs_stop_workers(&fs_info->readahead_workers);
-	btrfs_stop_workers(&fs_info->flush_workers);
+	btrfs_destroy_workqueue(fs_info->flush_workers);
 	btrfs_stop_workers(&fs_info->qgroup_rescan_workers);
 }
 
@@ -2479,9 +2479,8 @@ int open_ctree(struct super_block *sb,
 	fs_info->delalloc_workers =
 		btrfs_alloc_workqueue("delalloc", flags, max_active, 2);
 
-	btrfs_init_workers(&fs_info->flush_workers, "flush_delalloc",
-			   fs_info->thread_pool_size, NULL);
-
+	fs_info->flush_workers =
+		btrfs_alloc_workqueue("flush_delalloc", flags, max_active, 0);
 	btrfs_init_workers(&fs_info->caching_workers, "cache",
 			   fs_info->thread_pool_size, NULL);
 
@@ -2556,14 +2555,13 @@ int open_ctree(struct super_block *sb,
 	ret |= btrfs_start_workers(&fs_info->delayed_workers);
 	ret |= btrfs_start_workers(&fs_info->caching_workers);
 	ret |= btrfs_start_workers(&fs_info->readahead_workers);
-	ret |= btrfs_start_workers(&fs_info->flush_workers);
 	ret |= btrfs_start_workers(&fs_info->qgroup_rescan_workers);
 	if (ret) {
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
 	if (!(fs_info->workers && fs_info->delalloc_workers &&
-	      fs_info->submit_workers)) {
+	      fs_info->submit_workers && fs_info->flush_workers)) {
 		err = -ENOMEM;
 		goto fail_sb_buffer;
 	}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 220db71..929f1ee 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8163,7 +8163,7 @@ out_notrans:
 	return ret;
 }
 
-static void btrfs_run_delalloc_work(struct btrfs_work *work)
+static void btrfs_run_delalloc_work(struct btrfs_work_struct *work)
 {
 	struct btrfs_delalloc_work *delalloc_work;
 	struct inode *inode;
@@ -8201,7 +8201,7 @@ struct btrfs_delalloc_work *btrfs_alloc_delalloc_work(struct inode *inode,
 	work->inode = inode;
 	work->wait = wait;
 	work->delay_iput = delay_iput;
-	work->work.func = btrfs_run_delalloc_work;
+	btrfs_init_work(&work->work, btrfs_run_delalloc_work, NULL, NULL);
 
 	return work;
 }
@@ -8253,8 +8253,8 @@ static int __start_delalloc_inodes(struct btrfs_root *root, int delay_iput)
 			goto out;
 		}
 		list_add_tail(&work->list, &works);
-		btrfs_queue_worker(&root->fs_info->flush_workers,
-				   &work->work);
+		btrfs_queue_work(root->fs_info->flush_workers,
+				 &work->work);
 
 		cond_resched();
 		spin_lock(&root->delalloc_lock);
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 69582d5..e0c3cf0 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -552,7 +552,7 @@ void btrfs_remove_ordered_extent(struct inode *inode,
 	wake_up(&entry->wait);
 }
 
-static void btrfs_run_ordered_extent_work(struct btrfs_work *work)
+static void btrfs_run_ordered_extent_work(struct btrfs_work_struct *work)
 {
 	struct btrfs_ordered_extent *ordered;
 
@@ -585,10 +585,11 @@ int btrfs_wait_ordered_extents(struct btrfs_root *root, int nr)
 		atomic_inc(&ordered->refs);
 		spin_unlock(&root->ordered_extent_lock);
 
-		ordered->flush_work.func = btrfs_run_ordered_extent_work;
+		btrfs_init_work(&ordered->flush_work,
+				btrfs_run_ordered_extent_work, NULL, NULL);
 		list_add_tail(&ordered->work_list, &works);
-		btrfs_queue_worker(&root->fs_info->flush_workers,
-				   &ordered->flush_work);
+		btrfs_queue_work(root->fs_info->flush_workers,
+				 &ordered->flush_work);
 
 		cond_resched();
 		spin_lock(&root->ordered_extent_lock);
@@ -701,8 +702,8 @@ int btrfs_run_ordered_operations(struct btrfs_trans_handle *trans,
 			goto out;
 		}
 		list_add_tail(&work->list, &works);
-		btrfs_queue_worker(&root->fs_info->flush_workers,
-				   &work->work);
+		btrfs_queue_work(root->fs_info->flush_workers,
+				 &work->work);
 
 		cond_resched();
 		spin_lock(&root->fs_info->ordered_root_lock);
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 9b0450f..59b7074 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -133,7 +133,7 @@ struct btrfs_ordered_extent {
 	struct btrfs_work work;
 
 	struct completion completion;
-	struct btrfs_work flush_work;
+	struct btrfs_work_struct flush_work;
 	struct list_head work_list;
 };
--
1.8.5.1
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 09/18] btrfs: Replace fs_info->endio_* workqueue with btrfs_workqueue.
Replace the fs_info->endio_* workqueues with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace endio_*. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 12 +++--- fs/btrfs/disk-io.c | 104 +++++++++++++++++++++--------------------------- fs/btrfs/inode.c | 20 +++++----- fs/btrfs/ordered-data.h | 2 +- fs/btrfs/super.c | 11 ++--- 5 files changed, 68 insertions(+), 81 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 097364d..5096164 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1492,13 +1492,13 @@ struct btrfs_fs_info { struct btrfs_workqueue_struct *workers; struct btrfs_workqueue_struct *delalloc_workers; struct btrfs_workqueue_struct *flush_workers; - struct btrfs_workers endio_workers; - struct btrfs_workers endio_meta_workers; - struct btrfs_workers endio_raid56_workers; + struct btrfs_workqueue_struct *endio_workers; + struct btrfs_workqueue_struct *endio_meta_workers; + struct btrfs_workqueue_struct *endio_raid56_workers; struct btrfs_workers rmw_workers; - struct btrfs_workers endio_meta_write_workers; - struct btrfs_workers endio_write_workers; - struct btrfs_workers endio_freespace_worker; + struct btrfs_workqueue_struct *endio_meta_write_workers; + struct btrfs_workqueue_struct *endio_write_workers; + struct btrfs_workqueue_struct *endio_freespace_worker; struct btrfs_workqueue_struct *submit_workers; struct btrfs_workers caching_workers; struct btrfs_workers readahead_workers; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 139960f..4f8591a 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -54,7 +54,7 @@ #endif static struct extent_io_ops btree_extent_io_ops; -static void end_workqueue_fn(struct btrfs_work *work); +static void end_workqueue_fn(struct btrfs_work_struct *work); static void free_fs_root(struct btrfs_root *root); static int btrfs_check_super_valid(struct btrfs_fs_info *fs_info, int read_only); @@ -85,7 +85,7 @@ struct end_io_wq { int error; int metadata; struct list_head list; - struct btrfs_work work; + struct btrfs_work_struct work; }; /* @@ -681,32 +681,31 @@ static void end_workqueue_bio(struct bio *bio, int err) fs_info = end_io_wq->info; end_io_wq->error = err; - end_io_wq->work.func = end_workqueue_fn; - end_io_wq->work.flags = 0; + btrfs_init_work(&end_io_wq->work, end_workqueue_fn, NULL, NULL); if (bio->bi_rw & REQ_WRITE) { if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA) - btrfs_queue_worker(&fs_info->endio_meta_write_workers, - &end_io_wq->work); + btrfs_queue_work(fs_info->endio_meta_write_workers, + &end_io_wq->work); else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE) - btrfs_queue_worker(&fs_info->endio_freespace_worker, - &end_io_wq->work); + btrfs_queue_work(fs_info->endio_freespace_worker, + &end_io_wq->work); else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) - btrfs_queue_worker(&fs_info->endio_raid56_workers, - &end_io_wq->work); + btrfs_queue_work(fs_info->endio_raid56_workers, + &end_io_wq->work); else - btrfs_queue_worker(&fs_info->endio_write_workers, - &end_io_wq->work); + btrfs_queue_work(fs_info->endio_write_workers, + &end_io_wq->work); } else { if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) - btrfs_queue_worker(&fs_info->endio_raid56_workers, - &end_io_wq->work); + btrfs_queue_work(fs_info->endio_raid56_workers, + &end_io_wq->work); else if (end_io_wq->metadata) - btrfs_queue_worker(&fs_info->endio_meta_workers, - 
&end_io_wq->work); + btrfs_queue_work(fs_info->endio_meta_workers, + &end_io_wq->work); else - btrfs_queue_worker(&fs_info->endio_workers, - &end_io_wq->work); + btrfs_queue_work(fs_info->endio_workers, + &end_io_wq->work); } } @@ -1678,7 +1677,7 @@ static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi) * called by the kthread helper functions to finally call the bio end_io * functions. This is where read checksum verification actually happens */ -static void end_workqueue_fn(struct btrfs_work *work) +static void end_workqueue_fn(struct btrfs_work_struct *work) { struct bio *bio; struct end_io_wq *end_io_wq; @@ -2010,13 +2009,13 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) btrfs_stop_workers(&fs_info->fixup_workers); btrfs_destroy_workqueue(fs_info->delalloc_workers); btrfs_destroy_workqueue(fs_info->workers); - btrfs_stop_workers(&fs_info->endio_workers); - btrfs_stop_workers(&fs_info->endio_meta_workers); - btrfs_stop_workers(&fs_info->endio_raid56_workers); + btrfs_destroy_workqueue(fs_info->endio_workers); + btrfs_destroy_workqueue(fs_info->endio_meta_workers); + btrfs_destroy_workqueue(fs_info->endio_raid56_workers); btrfs_stop_workers(&fs_info->rmw_workers); - btrfs_stop_workers(&fs_info->endio_meta_write_workers); - btrfs_stop_workers(&fs_info->endio_write_workers); - btrfs_stop_workers(&fs_info->endio_freespace_worker); + btrfs_destroy_workqueue(fs_info->endio_meta_write_workers); + btrfs_destroy_workqueue(fs_info->endio_write_workers); + btrfs_destroy_workqueue(fs_info->endio_freespace_worker); btrfs_destroy_workqueue(fs_info->submit_workers); btrfs_stop_workers(&fs_info->delayed_workers); btrfs_stop_workers(&fs_info->caching_workers); @@ -2497,26 +2496,26 @@ int open_ctree(struct super_block *sb, btrfs_init_workers(&fs_info->fixup_workers, "fixup", 1, &fs_info->generic_worker); - btrfs_init_workers(&fs_info->endio_workers, "endio", - fs_info->thread_pool_size, - &fs_info->generic_worker); - btrfs_init_workers(&fs_info->endio_meta_workers, "endio-meta", - fs_info->thread_pool_size, - &fs_info->generic_worker); - btrfs_init_workers(&fs_info->endio_meta_write_workers, - "endio-meta-write", fs_info->thread_pool_size, - &fs_info->generic_worker); - btrfs_init_workers(&fs_info->endio_raid56_workers, - "endio-raid56", fs_info->thread_pool_size, - &fs_info->generic_worker); + + /* + * endios are largely parallel and should have a very + * low idle thresh + */ + fs_info->endio_workers + btrfs_alloc_workqueue("endio", flags, max_active, 4); + fs_info->endio_meta_workers + btrfs_alloc_workqueue("endio-meta", flags, max_active, 4); + fs_info->endio_meta_write_workers + btrfs_alloc_workqueue("endio-meta-write", flags, max_active, 2); + fs_info->endio_raid56_workers + btrfs_alloc_workqueue("endio-raid56", flags, max_active, 4); btrfs_init_workers(&fs_info->rmw_workers, "rmw", fs_info->thread_pool_size, &fs_info->generic_worker); - btrfs_init_workers(&fs_info->endio_write_workers, "endio-write", - fs_info->thread_pool_size, - &fs_info->generic_worker); - btrfs_init_workers(&fs_info->endio_freespace_worker, "freespace-write", - 1, &fs_info->generic_worker); + fs_info->endio_write_workers + btrfs_alloc_workqueue("endio-write", flags, max_active, 2); + fs_info->endio_freespace_worker + btrfs_alloc_workqueue("freespace-write", flags, max_active, 0); btrfs_init_workers(&fs_info->delayed_workers, "delayed-meta", fs_info->thread_pool_size, &fs_info->generic_worker); @@ -2526,17 +2525,8 @@ int open_ctree(struct super_block *sb, 
btrfs_init_workers(&fs_info->qgroup_rescan_workers, "qgroup-rescan", 1, &fs_info->generic_worker); - /* - * endios are largely parallel and should have a very - * low idle thresh - */ - fs_info->endio_workers.idle_thresh = 4; - fs_info->endio_meta_workers.idle_thresh = 4; - fs_info->endio_raid56_workers.idle_thresh = 4; fs_info->rmw_workers.idle_thresh = 2; - fs_info->endio_write_workers.idle_thresh = 2; - fs_info->endio_meta_write_workers.idle_thresh = 2; fs_info->readahead_workers.idle_thresh = 2; /* @@ -2545,13 +2535,7 @@ int open_ctree(struct super_block *sb, */ ret = btrfs_start_workers(&fs_info->generic_worker); ret |= btrfs_start_workers(&fs_info->fixup_workers); - ret |= btrfs_start_workers(&fs_info->endio_workers); - ret |= btrfs_start_workers(&fs_info->endio_meta_workers); ret |= btrfs_start_workers(&fs_info->rmw_workers); - ret |= btrfs_start_workers(&fs_info->endio_raid56_workers); - ret |= btrfs_start_workers(&fs_info->endio_meta_write_workers); - ret |= btrfs_start_workers(&fs_info->endio_write_workers); - ret |= btrfs_start_workers(&fs_info->endio_freespace_worker); ret |= btrfs_start_workers(&fs_info->delayed_workers); ret |= btrfs_start_workers(&fs_info->caching_workers); ret |= btrfs_start_workers(&fs_info->readahead_workers); @@ -2561,7 +2545,11 @@ int open_ctree(struct super_block *sb, goto fail_sb_buffer; } if (!(fs_info->workers && fs_info->delalloc_workers && - fs_info->submit_workers && fs_info->flush_workers)) { + fs_info->submit_workers && fs_info->flush_workers && + fs_info->endio_workers && fs_info->endio_meta_workers && + fs_info->endio_meta_write_workers && + fs_info->endio_write_workers && fs_info->endio_raid56_workers && + fs_info->endio_freespace_worker)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 929f1ee..d4f8dfb 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -2725,7 +2725,7 @@ out: return ret; } -static void finish_ordered_fn(struct btrfs_work *work) +static void finish_ordered_fn(struct btrfs_work_struct *work) { struct btrfs_ordered_extent *ordered_extent; ordered_extent = container_of(work, struct btrfs_ordered_extent, work); @@ -2738,7 +2738,7 @@ static int btrfs_writepage_end_io_hook(struct page *page, u64 start, u64 end, struct inode *inode = page->mapping->host; struct btrfs_root *root = BTRFS_I(inode)->root; struct btrfs_ordered_extent *ordered_extent = NULL; - struct btrfs_workers *workers; + struct btrfs_workqueue_struct *workers; trace_btrfs_writepage_end_io_hook(page, start, end, uptodate); @@ -2747,14 +2747,13 @@ static int btrfs_writepage_end_io_hook(struct page *page, u64 start, u64 end, end - start + 1, uptodate)) return 0; - ordered_extent->work.func = finish_ordered_fn; - ordered_extent->work.flags = 0; + btrfs_init_work(&ordered_extent->work, finish_ordered_fn, NULL, NULL); if (btrfs_is_free_space_inode(inode)) - workers = &root->fs_info->endio_freespace_worker; + workers = root->fs_info->endio_freespace_worker; else - workers = &root->fs_info->endio_write_workers; - btrfs_queue_worker(workers, &ordered_extent->work); + workers = root->fs_info->endio_write_workers; + btrfs_queue_work(workers, &ordered_extent->work); return 0; } @@ -6849,10 +6848,9 @@ again: if (!ret) goto out_test; - ordered->work.func = finish_ordered_fn; - ordered->work.flags = 0; - btrfs_queue_worker(&root->fs_info->endio_write_workers, - &ordered->work); + btrfs_init_work(&ordered->work, finish_ordered_fn, NULL, NULL); + btrfs_queue_work(root->fs_info->endio_write_workers, + &ordered->work); out_test: /* * 
our bio might span multiple ordered extents. If we haven''t diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h index 59b7074..9d88f37 100644 --- a/fs/btrfs/ordered-data.h +++ b/fs/btrfs/ordered-data.h @@ -130,7 +130,7 @@ struct btrfs_ordered_extent { /* a per root list of all the pending ordered extents */ struct list_head root_extent_list; - struct btrfs_work work; + struct btrfs_work_struct work; struct completion completion; struct btrfs_work_struct flush_work; diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 9f1d0a5..da3ec84 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1250,11 +1250,12 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, btrfs_workqueue_set_max(fs_info->submit_workers, new_pool_size); btrfs_set_max_workers(&fs_info->caching_workers, new_pool_size); btrfs_set_max_workers(&fs_info->fixup_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->endio_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->endio_meta_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->endio_meta_write_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->endio_write_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->endio_freespace_worker, new_pool_size); + btrfs_workqueue_set_max(fs_info->endio_workers, new_pool_size); + btrfs_workqueue_set_max(fs_info->endio_meta_workers, new_pool_size); + btrfs_workqueue_set_max(fs_info->endio_meta_write_workers, + new_pool_size); + btrfs_workqueue_set_max(fs_info->endio_write_workers, new_pool_size); + btrfs_workqueue_set_max(fs_info->endio_freespace_worker, new_pool_size); btrfs_set_max_workers(&fs_info->delayed_workers, new_pool_size); btrfs_set_max_workers(&fs_info->readahead_workers, new_pool_size); btrfs_set_max_workers(&fs_info->scrub_wr_completion_workers, -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
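The inode.c hunks above show the whole conversion idiom of this series in miniature: a work item loses its bare ->func/->flags fields, is set up through btrfs_init_work() and dispatched with btrfs_queue_work(), and the handler recovers its context with container_of(). A minimal sketch of that idiom, assuming only the btrfs_work_struct API exactly as the diff uses it; my_async_ctx, my_work_fn and queue_my_work are hypothetical names, not part of the patch:

	/* Sketch only: my_async_ctx, my_work_fn, queue_my_work are made up. */
	struct my_async_ctx {
		struct inode *inode;
		struct btrfs_work_struct work;	/* embedded, like btrfs_ordered_extent */
	};

	static void my_work_fn(struct btrfs_work_struct *work)
	{
		/* recover the containing context, as finish_ordered_fn() does */
		struct my_async_ctx *ctx = container_of(work, struct my_async_ctx,
							work);

		/* ... act on ctx->inode ... */
		kfree(ctx);
	}

	static int queue_my_work(struct btrfs_fs_info *fs_info,
				 struct inode *inode)
	{
		struct my_async_ctx *ctx = kmalloc(sizeof(*ctx), GFP_NOFS);

		if (!ctx)
			return -ENOMEM;
		ctx->inode = inode;
		/* ordered_func/ordered_free stay NULL on a plain queue */
		btrfs_init_work(&ctx->work, my_work_fn, NULL, NULL);
		btrfs_queue_work(fs_info->endio_write_workers, &ctx->work);
		return 0;
	}

Note how the queue argument is now a plain pointer (fs_info->endio_write_workers) rather than &fs_info->endio_write_workers; that sign change recurs in every patch of the series.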
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 10/18] btrfs: Replace fs_info->rmw_workers workqueue with btrfs_workqueue.
Replace the fs_info->rmw_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace rmw_workers. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 2 +- fs/btrfs/disk-io.c | 12 ++++-------- fs/btrfs/raid56.c | 35 ++++++++++++++++------------------- 3 files changed, 21 insertions(+), 28 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 5096164..294b373 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1495,7 +1495,7 @@ struct btrfs_fs_info { struct btrfs_workqueue_struct *endio_workers; struct btrfs_workqueue_struct *endio_meta_workers; struct btrfs_workqueue_struct *endio_raid56_workers; - struct btrfs_workers rmw_workers; + struct btrfs_workqueue_struct *rmw_workers; struct btrfs_workqueue_struct *endio_meta_write_workers; struct btrfs_workqueue_struct *endio_write_workers; struct btrfs_workqueue_struct *endio_freespace_worker; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 4f8591a..8b2977b 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2012,7 +2012,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) btrfs_destroy_workqueue(fs_info->endio_workers); btrfs_destroy_workqueue(fs_info->endio_meta_workers); btrfs_destroy_workqueue(fs_info->endio_raid56_workers); - btrfs_stop_workers(&fs_info->rmw_workers); + btrfs_destroy_workqueue(fs_info->rmw_workers); btrfs_destroy_workqueue(fs_info->endio_meta_write_workers); btrfs_destroy_workqueue(fs_info->endio_write_workers); btrfs_destroy_workqueue(fs_info->endio_freespace_worker); @@ -2509,9 +2509,8 @@ int open_ctree(struct super_block *sb, btrfs_alloc_workqueue("endio-meta-write", flags, max_active, 2); fs_info->endio_raid56_workers btrfs_alloc_workqueue("endio-raid56", flags, max_active, 4); - btrfs_init_workers(&fs_info->rmw_workers, - "rmw", fs_info->thread_pool_size, - &fs_info->generic_worker); + fs_info->rmw_workers + btrfs_alloc_workqueue("rmw", flags, max_active, 2); fs_info->endio_write_workers btrfs_alloc_workqueue("endio-write", flags, max_active, 2); fs_info->endio_freespace_worker @@ -2525,8 +2524,6 @@ int open_ctree(struct super_block *sb, btrfs_init_workers(&fs_info->qgroup_rescan_workers, "qgroup-rescan", 1, &fs_info->generic_worker); - fs_info->rmw_workers.idle_thresh = 2; - fs_info->readahead_workers.idle_thresh = 2; /* @@ -2535,7 +2532,6 @@ int open_ctree(struct super_block *sb, */ ret = btrfs_start_workers(&fs_info->generic_worker); ret |= btrfs_start_workers(&fs_info->fixup_workers); - ret |= btrfs_start_workers(&fs_info->rmw_workers); ret |= btrfs_start_workers(&fs_info->delayed_workers); ret |= btrfs_start_workers(&fs_info->caching_workers); ret |= btrfs_start_workers(&fs_info->readahead_workers); @@ -2549,7 +2545,7 @@ int open_ctree(struct super_block *sb, fs_info->endio_workers && fs_info->endio_meta_workers && fs_info->endio_meta_write_workers && fs_info->endio_write_workers && fs_info->endio_raid56_workers && - fs_info->endio_freespace_worker)) { + fs_info->endio_freespace_worker && fs_info->rmw_workers)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 24ac218..5afa564 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -87,7 +87,7 @@ struct btrfs_raid_bio { /* * for scheduling work in the helper threads */ - struct btrfs_work work; + struct btrfs_work_struct work; /* * bio list and bio_list_lock are used @@ -166,8 +166,8 @@ struct btrfs_raid_bio { static 
int __raid56_parity_recover(struct btrfs_raid_bio *rbio); static noinline void finish_rmw(struct btrfs_raid_bio *rbio); -static void rmw_work(struct btrfs_work *work); -static void read_rebuild_work(struct btrfs_work *work); +static void rmw_work(struct btrfs_work_struct *work); +static void read_rebuild_work(struct btrfs_work_struct *work); static void async_rmw_stripe(struct btrfs_raid_bio *rbio); static void async_read_rebuild(struct btrfs_raid_bio *rbio); static int fail_bio_stripe(struct btrfs_raid_bio *rbio, struct bio *bio); @@ -1416,20 +1416,18 @@ cleanup: static void async_rmw_stripe(struct btrfs_raid_bio *rbio) { - rbio->work.flags = 0; - rbio->work.func = rmw_work; + btrfs_init_work(&rbio->work, rmw_work, NULL, NULL); - btrfs_queue_worker(&rbio->fs_info->rmw_workers, - &rbio->work); + btrfs_queue_work(rbio->fs_info->rmw_workers, + &rbio->work); } static void async_read_rebuild(struct btrfs_raid_bio *rbio) { - rbio->work.flags = 0; - rbio->work.func = read_rebuild_work; + btrfs_init_work(&rbio->work, read_rebuild_work, NULL, NULL); - btrfs_queue_worker(&rbio->fs_info->rmw_workers, - &rbio->work); + btrfs_queue_work(rbio->fs_info->rmw_workers, + &rbio->work); } /* @@ -1590,7 +1588,7 @@ struct btrfs_plug_cb { struct blk_plug_cb cb; struct btrfs_fs_info *info; struct list_head rbio_list; - struct btrfs_work work; + struct btrfs_work_struct work; }; /* @@ -1654,7 +1652,7 @@ static void run_plug(struct btrfs_plug_cb *plug) * if the unplug comes from schedule, we have to push the * work off to a helper thread */ -static void unplug_work(struct btrfs_work *work) +static void unplug_work(struct btrfs_work_struct *work) { struct btrfs_plug_cb *plug; plug = container_of(work, struct btrfs_plug_cb, work); @@ -1667,10 +1665,9 @@ static void btrfs_raid_unplug(struct blk_plug_cb *cb, bool from_schedule) plug = container_of(cb, struct btrfs_plug_cb, cb); if (from_schedule) { - plug->work.flags = 0; - plug->work.func = unplug_work; - btrfs_queue_worker(&plug->info->rmw_workers, - &plug->work); + btrfs_init_work(&plug->work, unplug_work, NULL, NULL); + btrfs_queue_work(plug->info->rmw_workers, + &plug->work); return; } run_plug(plug); @@ -2082,7 +2079,7 @@ int raid56_parity_recover(struct btrfs_root *root, struct bio *bio, } -static void rmw_work(struct btrfs_work *work) +static void rmw_work(struct btrfs_work_struct *work) { struct btrfs_raid_bio *rbio; @@ -2090,7 +2087,7 @@ static void rmw_work(struct btrfs_work *work) raid56_rmw_stripe(rbio); } -static void read_rebuild_work(struct btrfs_work *work) +static void read_rebuild_work(struct btrfs_work_struct *work) { struct btrfs_raid_bio *rbio; -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
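The allocation side of this patch also shows the simplified btrfs_alloc_workqueue() calling convention mentioned in the changelog: a name, workqueue flags, max_active, and a trailing threshold that takes over the old ->idle_thresh tuning (the 2 passed for "rmw" matches the idle_thresh = 2 assignment the patch deletes). A sketch under those assumptions; alloc_rmw_queue is a hypothetical wrapper, and the flags value shown is the one patch 16 spells out for scrub, not necessarily open_ctree's local:

	static struct btrfs_workqueue_struct *
	alloc_rmw_queue(struct btrfs_fs_info *fs_info)
	{
		int flags = WQ_FREEZABLE | WQ_UNBOUND;
		int max_active = fs_info->thread_pool_size;

		/* returns NULL on failure; open_ctree checks for that later */
		return btrfs_alloc_workqueue("rmw", flags, max_active, 2);
	}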
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 11/18] btrfs: Replace fs_info->cache_workers workqueue with btrfs_workqueue.
Replace the fs_info->cache_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace caching_workers. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 4 ++-- fs/btrfs/disk-io.c | 10 +++++----- fs/btrfs/extent-tree.c | 6 +++--- fs/btrfs/super.c | 2 +- 4 files changed, 11 insertions(+), 11 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 294b373..8630986 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1205,7 +1205,7 @@ struct btrfs_caching_control { struct list_head list; struct mutex mutex; wait_queue_head_t wait; - struct btrfs_work work; + struct btrfs_work_struct work; struct btrfs_block_group_cache *block_group; u64 progress; atomic_t count; @@ -1500,7 +1500,7 @@ struct btrfs_fs_info { struct btrfs_workqueue_struct *endio_write_workers; struct btrfs_workqueue_struct *endio_freespace_worker; struct btrfs_workqueue_struct *submit_workers; - struct btrfs_workers caching_workers; + struct btrfs_workqueue_struct *caching_workers; struct btrfs_workers readahead_workers; /* diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 8b2977b..d8f42d2 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2018,7 +2018,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) btrfs_destroy_workqueue(fs_info->endio_freespace_worker); btrfs_destroy_workqueue(fs_info->submit_workers); btrfs_stop_workers(&fs_info->delayed_workers); - btrfs_stop_workers(&fs_info->caching_workers); + btrfs_destroy_workqueue(fs_info->caching_workers); btrfs_stop_workers(&fs_info->readahead_workers); btrfs_destroy_workqueue(fs_info->flush_workers); btrfs_stop_workers(&fs_info->qgroup_rescan_workers); @@ -2481,8 +2481,8 @@ int open_ctree(struct super_block *sb, fs_info->flush_workers btrfs_alloc_workqueue("flush_delalloc", flags, max_active, 0); - btrfs_init_workers(&fs_info->caching_workers, "cache", - fs_info->thread_pool_size, NULL); + fs_info->caching_workers + btrfs_alloc_workqueue("cache", flags, max_active, 0); /* * a higher idle thresh on the submit workers makes it much more @@ -2533,7 +2533,6 @@ int open_ctree(struct super_block *sb, ret = btrfs_start_workers(&fs_info->generic_worker); ret |= btrfs_start_workers(&fs_info->fixup_workers); ret |= btrfs_start_workers(&fs_info->delayed_workers); - ret |= btrfs_start_workers(&fs_info->caching_workers); ret |= btrfs_start_workers(&fs_info->readahead_workers); ret |= btrfs_start_workers(&fs_info->qgroup_rescan_workers); if (ret) { @@ -2545,7 +2544,8 @@ int open_ctree(struct super_block *sb, fs_info->endio_workers && fs_info->endio_meta_workers && fs_info->endio_meta_write_workers && fs_info->endio_write_workers && fs_info->endio_raid56_workers && - fs_info->endio_freespace_worker && fs_info->rmw_workers)) { + fs_info->endio_freespace_worker && fs_info->rmw_workers && + fs_info->caching_workers)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index 45d98d0..80ecc14 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -377,7 +377,7 @@ static u64 add_new_free_space(struct btrfs_block_group_cache *block_group, return total_added; } -static noinline void caching_thread(struct btrfs_work *work) +static noinline void caching_thread(struct btrfs_work_struct *work) { struct btrfs_block_group_cache *block_group; struct btrfs_fs_info *fs_info; @@ -547,7 +547,7 @@ static int cache_block_group(struct 
btrfs_block_group_cache *cache, caching_ctl->block_group = cache; caching_ctl->progress = cache->key.objectid; atomic_set(&caching_ctl->count, 1); - caching_ctl->work.func = caching_thread; + btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL); spin_lock(&cache->lock); /* @@ -638,7 +638,7 @@ static int cache_block_group(struct btrfs_block_group_cache *cache, btrfs_get_block_group(cache); - btrfs_queue_worker(&fs_info->caching_workers, &caching_ctl->work); + btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work); return ret; } diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index da3ec84..5bfe566 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1248,7 +1248,7 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, btrfs_workqueue_set_max(fs_info->workers, new_pool_size); btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->submit_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->caching_workers, new_pool_size); + btrfs_workqueue_set_max(fs_info->caching_workers, new_pool_size); btrfs_set_max_workers(&fs_info->fixup_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_meta_workers, new_pool_size); -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
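For long-lived contexts such as btrfs_caching_control the work struct is embedded and initialized once, just before the item is handed to the queue; there is no per-item start/stop step. A condensed sketch of the cache_block_group() flow above, with the locking and block-group refcounting of the real function deliberately omitted (start_caching is a hypothetical name):

	static void start_caching(struct btrfs_fs_info *fs_info,
				  struct btrfs_caching_control *caching_ctl)
	{
		atomic_set(&caching_ctl->count, 1);
		btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
		btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work);
	}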
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 12/18] btrfs: Replace fs_info->readahead_workers workqueue with btrfs_workqueue.
Replace the fs_info->readahead_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace readahead_workers. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 2 +- fs/btrfs/disk-io.c | 12 ++++-------- fs/btrfs/reada.c | 9 +++++---- fs/btrfs/super.c | 2 +- 4 files changed, 11 insertions(+), 14 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 8630986..302dc46 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1501,7 +1501,7 @@ struct btrfs_fs_info { struct btrfs_workqueue_struct *endio_freespace_worker; struct btrfs_workqueue_struct *submit_workers; struct btrfs_workqueue_struct *caching_workers; - struct btrfs_workers readahead_workers; + struct btrfs_workqueue_struct *readahead_workers; /* * fixup workers take dirty pages that didn''t properly go through diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index d8f42d2..4d49d87 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2019,7 +2019,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) btrfs_destroy_workqueue(fs_info->submit_workers); btrfs_stop_workers(&fs_info->delayed_workers); btrfs_destroy_workqueue(fs_info->caching_workers); - btrfs_stop_workers(&fs_info->readahead_workers); + btrfs_destroy_workqueue(fs_info->readahead_workers); btrfs_destroy_workqueue(fs_info->flush_workers); btrfs_stop_workers(&fs_info->qgroup_rescan_workers); } @@ -2518,14 +2518,11 @@ int open_ctree(struct super_block *sb, btrfs_init_workers(&fs_info->delayed_workers, "delayed-meta", fs_info->thread_pool_size, &fs_info->generic_worker); - btrfs_init_workers(&fs_info->readahead_workers, "readahead", - fs_info->thread_pool_size, - &fs_info->generic_worker); + fs_info->readahead_workers + btrfs_alloc_workqueue("readahead", flags, max_active, 2); btrfs_init_workers(&fs_info->qgroup_rescan_workers, "qgroup-rescan", 1, &fs_info->generic_worker); - fs_info->readahead_workers.idle_thresh = 2; - /* * btrfs_start_workers can really only fail because of ENOMEM so just * return -ENOMEM if any of these fail. 
@@ -2533,7 +2530,6 @@ int open_ctree(struct super_block *sb, ret = btrfs_start_workers(&fs_info->generic_worker); ret |= btrfs_start_workers(&fs_info->fixup_workers); ret |= btrfs_start_workers(&fs_info->delayed_workers); - ret |= btrfs_start_workers(&fs_info->readahead_workers); ret |= btrfs_start_workers(&fs_info->qgroup_rescan_workers); if (ret) { err = -ENOMEM; @@ -2545,7 +2541,7 @@ int open_ctree(struct super_block *sb, fs_info->endio_meta_write_workers && fs_info->endio_write_workers && fs_info->endio_raid56_workers && fs_info->endio_freespace_worker && fs_info->rmw_workers && - fs_info->caching_workers)) { + fs_info->caching_workers && fs_info->readahead_workers)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c index 1031b69..854b69a 100644 --- a/fs/btrfs/reada.c +++ b/fs/btrfs/reada.c @@ -91,7 +91,8 @@ struct reada_zone { }; struct reada_machine_work { - struct btrfs_work work; + struct btrfs_work_struct + work; struct btrfs_fs_info *fs_info; }; @@ -732,7 +733,7 @@ static int reada_start_machine_dev(struct btrfs_fs_info *fs_info, } -static void reada_start_machine_worker(struct btrfs_work *work) +static void reada_start_machine_worker(struct btrfs_work_struct *work) { struct reada_machine_work *rmw; struct btrfs_fs_info *fs_info; @@ -792,10 +793,10 @@ static void reada_start_machine(struct btrfs_fs_info *fs_info) /* FIXME we cannot handle this properly right now */ BUG(); } - rmw->work.func = reada_start_machine_worker; + btrfs_init_work(&rmw->work, reada_start_machine_worker, NULL, NULL); rmw->fs_info = fs_info; - btrfs_queue_worker(&fs_info->readahead_workers, &rmw->work); + btrfs_queue_work(fs_info->readahead_workers, &rmw->work); } #ifdef DEBUG diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 5bfe566..7a46e23 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1257,7 +1257,7 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, btrfs_workqueue_set_max(fs_info->endio_write_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_freespace_worker, new_pool_size); btrfs_set_max_workers(&fs_info->delayed_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->readahead_workers, new_pool_size); + btrfs_workqueue_set_max(fs_info->readahead_workers, new_pool_size); btrfs_set_max_workers(&fs_info->scrub_wr_completion_workers, new_pool_size); } -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
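reada uses the other common shape: a small, heap-allocated one-shot work item. A sketch condensed from reada_start_machine() above, with the allocation-failure BUG() of the real code replaced by an early return for illustration (kick_reada_machine is a hypothetical name):

	static void kick_reada_machine(struct btrfs_fs_info *fs_info)
	{
		struct reada_machine_work *rmw;

		rmw = kzalloc(sizeof(*rmw), GFP_NOFS);
		if (!rmw)
			return;	/* the real code cannot handle this and BUG()s */

		rmw->fs_info = fs_info;
		btrfs_init_work(&rmw->work, reada_start_machine_worker,
				NULL, NULL);
		btrfs_queue_work(fs_info->readahead_workers, &rmw->work);
	}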
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 13/18] btrfs: Replace fs_info->fixup_workers workqueue with btrfs_workqueue.
Replace the fs_info->fixup_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace fixup_workers. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 2 +- fs/btrfs/disk-io.c | 10 +++++----- fs/btrfs/inode.c | 8 ++++---- fs/btrfs/super.c | 1 - 4 files changed, 10 insertions(+), 11 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 302dc46..845615e 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1508,7 +1508,7 @@ struct btrfs_fs_info { * the cow mechanism and make them safe to write. It happens * for the sys_munmap function call path */ - struct btrfs_workers fixup_workers; + struct btrfs_workqueue_struct *fixup_workers; struct btrfs_workers delayed_workers; struct task_struct *transaction_kthread; struct task_struct *cleaner_kthread; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 4d49d87..e5dec5a 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2006,7 +2006,7 @@ static noinline int next_root_backup(struct btrfs_fs_info *info, static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) { btrfs_stop_workers(&fs_info->generic_worker); - btrfs_stop_workers(&fs_info->fixup_workers); + btrfs_destroy_workqueue(fs_info->fixup_workers); btrfs_destroy_workqueue(fs_info->delalloc_workers); btrfs_destroy_workqueue(fs_info->workers); btrfs_destroy_workqueue(fs_info->endio_workers); @@ -2494,8 +2494,8 @@ int open_ctree(struct super_block *sb, min_t(u64, fs_devices->num_devices, max_active), 64); - btrfs_init_workers(&fs_info->fixup_workers, "fixup", 1, - &fs_info->generic_worker); + fs_info->fixup_workers + btrfs_alloc_workqueue("fixup", flags, 1, 0); /* * endios are largely parallel and should have a very @@ -2528,7 +2528,6 @@ int open_ctree(struct super_block *sb, * return -ENOMEM if any of these fail. 
*/ ret = btrfs_start_workers(&fs_info->generic_worker); - ret |= btrfs_start_workers(&fs_info->fixup_workers); ret |= btrfs_start_workers(&fs_info->delayed_workers); ret |= btrfs_start_workers(&fs_info->qgroup_rescan_workers); if (ret) { @@ -2541,7 +2540,8 @@ int open_ctree(struct super_block *sb, fs_info->endio_meta_write_workers && fs_info->endio_write_workers && fs_info->endio_raid56_workers && fs_info->endio_freespace_worker && fs_info->rmw_workers && - fs_info->caching_workers && fs_info->readahead_workers)) { + fs_info->caching_workers && fs_info->readahead_workers && + fs_info->fixup_workers)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index d4f8dfb..62e4fc2 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1727,10 +1727,10 @@ int btrfs_set_extent_delalloc(struct inode *inode, u64 start, u64 end, /* see btrfs_writepage_start_hook for details on why this is required */ struct btrfs_writepage_fixup { struct page *page; - struct btrfs_work work; + struct btrfs_work_struct work; }; -static void btrfs_writepage_fixup_worker(struct btrfs_work *work) +static void btrfs_writepage_fixup_worker(struct btrfs_work_struct *work) { struct btrfs_writepage_fixup *fixup; struct btrfs_ordered_extent *ordered; @@ -1821,9 +1821,9 @@ static int btrfs_writepage_start_hook(struct page *page, u64 start, u64 end) SetPageChecked(page); page_cache_get(page); - fixup->work.func = btrfs_writepage_fixup_worker; + btrfs_init_work(&fixup->work, btrfs_writepage_fixup_worker, NULL, NULL); fixup->page = page; - btrfs_queue_worker(&root->fs_info->fixup_workers, &fixup->work); + btrfs_queue_work(root->fs_info->fixup_workers, &fixup->work); return -EBUSY; } diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 7a46e23..f7fd00c 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1249,7 +1249,6 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->submit_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->caching_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->fixup_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_meta_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_meta_write_workers, -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
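Because page fixup must stay single-threaded, the queue is created with max_active == 1, and the super.c hunk drops it from btrfs_resize_thread_pool() so a thread_pool= remount can no longer widen it. A sketch of that allocation; alloc_fixup_queue is a hypothetical helper, and flags stands in for open_ctree's existing workqueue flags local:

	static int alloc_fixup_queue(struct btrfs_fs_info *fs_info, int flags)
	{
		/* max_active 1: at most one fixup runs at a time */
		fs_info->fixup_workers = btrfs_alloc_workqueue("fixup", flags,
							       1, 0);
		if (!fs_info->fixup_workers)
			return -ENOMEM;
		return 0;
	}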
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 14/18] btrfs: Replace fs_info->delayed_workers workqueue with btrfs_workqueue.
Replace the fs_info->delayed_workers with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: None v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 2 +- fs/btrfs/delayed-inode.c | 10 +++++----- fs/btrfs/disk-io.c | 10 ++++------ fs/btrfs/super.c | 2 +- 4 files changed, 11 insertions(+), 13 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 845615e..698cebc 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1509,7 +1509,7 @@ struct btrfs_fs_info { * for the sys_munmap function call path */ struct btrfs_workqueue_struct *fixup_workers; - struct btrfs_workers delayed_workers; + struct btrfs_workqueue_struct *delayed_workers; struct task_struct *transaction_kthread; struct task_struct *cleaner_kthread; int thread_pool_size; diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c index 8d292fb..e4ad5ea 100644 --- a/fs/btrfs/delayed-inode.c +++ b/fs/btrfs/delayed-inode.c @@ -1260,10 +1260,10 @@ void btrfs_remove_delayed_node(struct inode *inode) struct btrfs_async_delayed_work { struct btrfs_delayed_root *delayed_root; int nr; - struct btrfs_work work; + struct btrfs_work_struct work; }; -static void btrfs_async_run_delayed_root(struct btrfs_work *work) +static void btrfs_async_run_delayed_root(struct btrfs_work_struct *work) { struct btrfs_async_delayed_work *async_work; struct btrfs_delayed_root *delayed_root; @@ -1361,11 +1361,11 @@ static int btrfs_wq_run_delayed_node(struct btrfs_delayed_root *delayed_root, return -ENOMEM; async_work->delayed_root = delayed_root; - async_work->work.func = btrfs_async_run_delayed_root; - async_work->work.flags = 0; + btrfs_init_work(&async_work->work, btrfs_async_run_delayed_root, + NULL, NULL); async_work->nr = nr; - btrfs_queue_worker(&root->fs_info->delayed_workers, &async_work->work); + btrfs_queue_work(root->fs_info->delayed_workers, &async_work->work); return 0; } diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index e5dec5a..9053df8 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2017,7 +2017,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) btrfs_destroy_workqueue(fs_info->endio_write_workers); btrfs_destroy_workqueue(fs_info->endio_freespace_worker); btrfs_destroy_workqueue(fs_info->submit_workers); - btrfs_stop_workers(&fs_info->delayed_workers); + btrfs_destroy_workqueue(fs_info->delayed_workers); btrfs_destroy_workqueue(fs_info->caching_workers); btrfs_destroy_workqueue(fs_info->readahead_workers); btrfs_destroy_workqueue(fs_info->flush_workers); @@ -2515,9 +2515,8 @@ int open_ctree(struct super_block *sb, btrfs_alloc_workqueue("endio-write", flags, max_active, 2); fs_info->endio_freespace_worker btrfs_alloc_workqueue("freespace-write", flags, max_active, 0); - btrfs_init_workers(&fs_info->delayed_workers, "delayed-meta", - fs_info->thread_pool_size, - &fs_info->generic_worker); + fs_info->delayed_workers + btrfs_alloc_workqueue("delayed-meta", flags, max_active, 0); fs_info->readahead_workers btrfs_alloc_workqueue("readahead", flags, max_active, 2); btrfs_init_workers(&fs_info->qgroup_rescan_workers, "qgroup-rescan", 1, @@ -2528,7 +2527,6 @@ int open_ctree(struct super_block *sb, * return -ENOMEM if any of these fail. 
*/ ret = btrfs_start_workers(&fs_info->generic_worker); - ret |= btrfs_start_workers(&fs_info->delayed_workers); ret |= btrfs_start_workers(&fs_info->qgroup_rescan_workers); if (ret) { err = -ENOMEM; @@ -2541,7 +2539,7 @@ int open_ctree(struct super_block *sb, fs_info->endio_write_workers && fs_info->endio_raid56_workers && fs_info->endio_freespace_worker && fs_info->rmw_workers && fs_info->caching_workers && fs_info->readahead_workers && - fs_info->fixup_workers)) { + fs_info->fixup_workers && fs_info->delayed_workers)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index f7fd00c..83d3477 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1255,7 +1255,7 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_write_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->endio_freespace_worker, new_pool_size); - btrfs_set_max_workers(&fs_info->delayed_workers, new_pool_size); + btrfs_workqueue_set_max(fs_info->delayed_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->readahead_workers, new_pool_size); btrfs_set_max_workers(&fs_info->scrub_wr_completion_workers, new_pool_size); -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
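The disk-io.c hunk also shows the matching teardown: each converted pool is shut down with btrfs_destroy_workqueue() on the pointer instead of btrfs_stop_workers() on an embedded struct. A sketch of the submission side, condensed from btrfs_wq_run_delayed_node() above (the real function also inspects the delayed root's state before queueing):

	static int queue_delayed_flush(struct btrfs_root *root,
				       struct btrfs_delayed_root *delayed_root,
				       int nr)
	{
		struct btrfs_async_delayed_work *async_work;

		async_work = kmalloc(sizeof(*async_work), GFP_NOFS);
		if (!async_work)
			return -ENOMEM;

		async_work->delayed_root = delayed_root;
		async_work->nr = nr;
		btrfs_init_work(&async_work->work, btrfs_async_run_delayed_root,
				NULL, NULL);
		btrfs_queue_work(root->fs_info->delayed_workers,
				 &async_work->work);
		return 0;
	}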
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 15/18] btrfs: Replace fs_info->qgroup_rescan_worker workqueue with btrfs_workqueue.
Replace the fs_info->qgroup_rescan_worker with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace qgroup_rescan_workers. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 4 ++-- fs/btrfs/disk-io.c | 10 +++++----- fs/btrfs/qgroup.c | 17 +++++++++-------- 3 files changed, 16 insertions(+), 15 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 698cebc..df51fa3 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1630,9 +1630,9 @@ struct btrfs_fs_info { /* qgroup rescan items */ struct mutex qgroup_rescan_lock; /* protects the progress item */ struct btrfs_key qgroup_rescan_progress; - struct btrfs_workers qgroup_rescan_workers; + struct btrfs_workqueue_struct *qgroup_rescan_workers; struct completion qgroup_rescan_completion; - struct btrfs_work qgroup_rescan_work; + struct btrfs_work_struct qgroup_rescan_work; /* filesystem state */ unsigned long fs_state; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 9053df8..fb94e94 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2021,7 +2021,7 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) btrfs_destroy_workqueue(fs_info->caching_workers); btrfs_destroy_workqueue(fs_info->readahead_workers); btrfs_destroy_workqueue(fs_info->flush_workers); - btrfs_stop_workers(&fs_info->qgroup_rescan_workers); + btrfs_destroy_workqueue(fs_info->qgroup_rescan_workers); } static void free_root_extent_buffers(struct btrfs_root *root) @@ -2519,15 +2519,14 @@ int open_ctree(struct super_block *sb, btrfs_alloc_workqueue("delayed-meta", flags, max_active, 0); fs_info->readahead_workers btrfs_alloc_workqueue("readahead", flags, max_active, 2); - btrfs_init_workers(&fs_info->qgroup_rescan_workers, "qgroup-rescan", 1, - &fs_info->generic_worker); + fs_info->qgroup_rescan_workers + btrfs_alloc_workqueue("qgroup-rescan", flags, 1, 0); /* * btrfs_start_workers can really only fail because of ENOMEM so just * return -ENOMEM if any of these fail. 
*/ ret = btrfs_start_workers(&fs_info->generic_worker); - ret |= btrfs_start_workers(&fs_info->qgroup_rescan_workers); if (ret) { err = -ENOMEM; goto fail_sb_buffer; @@ -2539,7 +2538,8 @@ int open_ctree(struct super_block *sb, fs_info->endio_write_workers && fs_info->endio_raid56_workers && fs_info->endio_freespace_worker && fs_info->rmw_workers && fs_info->caching_workers && fs_info->readahead_workers && - fs_info->fixup_workers && fs_info->delayed_workers)) { + fs_info->fixup_workers && fs_info->delayed_workers && + fs_info->qgroup_rescan_workers)) { err = -ENOMEM; goto fail_sb_buffer; } diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index 4e6ef49..521144e 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -1516,8 +1516,8 @@ int btrfs_run_qgroups(struct btrfs_trans_handle *trans, ret = qgroup_rescan_init(fs_info, 0, 1); if (!ret) { qgroup_rescan_zero_tracking(fs_info); - btrfs_queue_worker(&fs_info->qgroup_rescan_workers, - &fs_info->qgroup_rescan_work); + btrfs_queue_work(fs_info->qgroup_rescan_workers, + &fs_info->qgroup_rescan_work); } ret = 0; } @@ -1981,7 +1981,7 @@ out: return ret; } -static void btrfs_qgroup_rescan_worker(struct btrfs_work *work) +static void btrfs_qgroup_rescan_worker(struct btrfs_work_struct *work) { struct btrfs_fs_info *fs_info = container_of(work, struct btrfs_fs_info, qgroup_rescan_work); @@ -2092,7 +2092,8 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid, memset(&fs_info->qgroup_rescan_work, 0, sizeof(fs_info->qgroup_rescan_work)); - fs_info->qgroup_rescan_work.func = btrfs_qgroup_rescan_worker; + btrfs_init_work(&fs_info->qgroup_rescan_work, + btrfs_qgroup_rescan_worker, NULL, NULL); if (ret) { err: @@ -2155,8 +2156,8 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info) qgroup_rescan_zero_tracking(fs_info); - btrfs_queue_worker(&fs_info->qgroup_rescan_workers, - &fs_info->qgroup_rescan_work); + btrfs_queue_work(fs_info->qgroup_rescan_workers, + &fs_info->qgroup_rescan_work); return 0; } @@ -2187,6 +2188,6 @@ void btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info) { if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) - btrfs_queue_worker(&fs_info->qgroup_rescan_workers, - &fs_info->qgroup_rescan_work); + btrfs_queue_work(fs_info->qgroup_rescan_workers, + &fs_info->qgroup_rescan_work); } -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
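qgroup is the one caller whose work struct lives directly in fs_info, so it is zeroed and re-initialized before each rescan rather than allocated per use. A condensed sketch of the pattern from qgroup_rescan_init() and btrfs_qgroup_rescan() above (requeue_qgroup_rescan is a hypothetical name; the real code splits this across init and the rescan entry points):

	static void requeue_qgroup_rescan(struct btrfs_fs_info *fs_info)
	{
		memset(&fs_info->qgroup_rescan_work, 0,
		       sizeof(fs_info->qgroup_rescan_work));
		btrfs_init_work(&fs_info->qgroup_rescan_work,
				btrfs_qgroup_rescan_worker, NULL, NULL);
		btrfs_queue_work(fs_info->qgroup_rescan_workers,
				 &fs_info->qgroup_rescan_work);
	}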
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 16/18] btrfs: Replace fs_info->scrub_* workqueue with btrfs_workqueue.
Replace the fs_info->scrub_* with the newly created btrfs_workqueue. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Use the btrfs_workqueue_struct to replace scrub_*. v3->v4: - Use the simplified btrfs_alloc_workqueue API. --- fs/btrfs/ctree.h | 6 ++-- fs/btrfs/scrub.c | 93 ++++++++++++++++++++++++++++++-------------------------- fs/btrfs/super.c | 4 +-- 3 files changed, 55 insertions(+), 48 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index df51fa3..5d71258 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1587,9 +1587,9 @@ struct btrfs_fs_info { atomic_t scrub_cancel_req; wait_queue_head_t scrub_pause_wait; int scrub_workers_refcnt; - struct btrfs_workers scrub_workers; - struct btrfs_workers scrub_wr_completion_workers; - struct btrfs_workers scrub_nocow_workers; + struct btrfs_workqueue_struct *scrub_workers; + struct btrfs_workqueue_struct *scrub_wr_completion_workers; + struct btrfs_workqueue_struct *scrub_nocow_workers; #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY u32 check_integrity_print_mask; diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 561e2f1..1618d6d 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -96,7 +96,8 @@ struct scrub_bio { #endif int page_count; int next_free; - struct btrfs_work work; + struct btrfs_work_struct + work; }; struct scrub_block { @@ -154,7 +155,8 @@ struct scrub_fixup_nodatasum { struct btrfs_device *dev; u64 logical; struct btrfs_root *root; - struct btrfs_work work; + struct btrfs_work_struct + work; int mirror_num; }; @@ -172,7 +174,8 @@ struct scrub_copy_nocow_ctx { int mirror_num; u64 physical_for_dev_replace; struct list_head inodes; - struct btrfs_work work; + struct btrfs_work_struct + work; }; struct scrub_warning { @@ -232,7 +235,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, u64 gen, int mirror_num, u8 *csum, int force, u64 physical_for_dev_replace); static void scrub_bio_end_io(struct bio *bio, int err); -static void scrub_bio_end_io_worker(struct btrfs_work *work); +static void scrub_bio_end_io_worker(struct btrfs_work_struct *work); static void scrub_block_complete(struct scrub_block *sblock); static void scrub_remap_extent(struct btrfs_fs_info *fs_info, u64 extent_logical, u64 extent_len, @@ -249,14 +252,14 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx, struct scrub_page *spage); static void scrub_wr_submit(struct scrub_ctx *sctx); static void scrub_wr_bio_end_io(struct bio *bio, int err); -static void scrub_wr_bio_end_io_worker(struct btrfs_work *work); +static void scrub_wr_bio_end_io_worker(struct btrfs_work_struct *work); static int write_page_nocow(struct scrub_ctx *sctx, u64 physical_for_dev_replace, struct page *page); static int copy_nocow_pages_for_inode(u64 inum, u64 offset, u64 root, struct scrub_copy_nocow_ctx *ctx); static int copy_nocow_pages(struct scrub_ctx *sctx, u64 logical, u64 len, int mirror_num, u64 physical_for_dev_replace); -static void copy_nocow_pages_worker(struct btrfs_work *work); +static void copy_nocow_pages_worker(struct btrfs_work_struct *work); static void scrub_pending_bio_inc(struct scrub_ctx *sctx) @@ -394,7 +397,8 @@ struct scrub_ctx *scrub_setup_ctx(struct btrfs_device *dev, int is_dev_replace) sbio->index = i; sbio->sctx = sctx; sbio->page_count = 0; - sbio->work.func = scrub_bio_end_io_worker; + btrfs_init_work(&sbio->work, scrub_bio_end_io_worker, + NULL, NULL); if (i != SCRUB_BIOS_PER_SCTX - 1) sctx->bios[i]->next_free = i + 1; @@ -699,7 +703,7 @@ out: return -EIO; } -static void 
scrub_fixup_nodatasum(struct btrfs_work *work) +static void scrub_fixup_nodatasum(struct btrfs_work_struct *work) { int ret; struct scrub_fixup_nodatasum *fixup; @@ -965,9 +969,10 @@ nodatasum_case: fixup_nodatasum->root = fs_info->extent_root; fixup_nodatasum->mirror_num = failed_mirror_index + 1; scrub_pending_trans_workers_inc(sctx); - fixup_nodatasum->work.func = scrub_fixup_nodatasum; - btrfs_queue_worker(&fs_info->scrub_workers, - &fixup_nodatasum->work); + btrfs_init_work(&fixup_nodatasum->work, scrub_fixup_nodatasum, + NULL, NULL); + btrfs_queue_work(fs_info->scrub_workers, + &fixup_nodatasum->work); goto out; } @@ -1599,11 +1604,11 @@ static void scrub_wr_bio_end_io(struct bio *bio, int err) sbio->err = err; sbio->bio = bio; - sbio->work.func = scrub_wr_bio_end_io_worker; - btrfs_queue_worker(&fs_info->scrub_wr_completion_workers, &sbio->work); + btrfs_init_work(&sbio->work, scrub_wr_bio_end_io_worker, NULL, NULL); + btrfs_queue_work(fs_info->scrub_wr_completion_workers, &sbio->work); } -static void scrub_wr_bio_end_io_worker(struct btrfs_work *work) +static void scrub_wr_bio_end_io_worker(struct btrfs_work_struct *work) { struct scrub_bio *sbio = container_of(work, struct scrub_bio, work); struct scrub_ctx *sctx = sbio->sctx; @@ -2068,10 +2073,10 @@ static void scrub_bio_end_io(struct bio *bio, int err) sbio->err = err; sbio->bio = bio; - btrfs_queue_worker(&fs_info->scrub_workers, &sbio->work); + btrfs_queue_work(fs_info->scrub_workers, &sbio->work); } -static void scrub_bio_end_io_worker(struct btrfs_work *work) +static void scrub_bio_end_io_worker(struct btrfs_work_struct *work) { struct scrub_bio *sbio = container_of(work, struct scrub_bio, work); struct scrub_ctx *sctx = sbio->sctx; @@ -2785,33 +2790,35 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info, int is_dev_replace) { int ret = 0; + int flags = WQ_FREEZABLE | WQ_UNBOUND; + int max_active = fs_info->thread_pool_size; if (fs_info->scrub_workers_refcnt == 0) { if (is_dev_replace) - btrfs_init_workers(&fs_info->scrub_workers, "scrub", 1, - &fs_info->generic_worker); + fs_info->scrub_workers + btrfs_alloc_workqueue("btrfs-scrub", flags, + 1, 4); else - btrfs_init_workers(&fs_info->scrub_workers, "scrub", - fs_info->thread_pool_size, - &fs_info->generic_worker); - fs_info->scrub_workers.idle_thresh = 4; - ret = btrfs_start_workers(&fs_info->scrub_workers); - if (ret) + fs_info->scrub_workers + btrfs_alloc_workqueue("btrfs-scrub", flags, + max_active, 4); + if (!fs_info->scrub_workers) { + ret = -ENOMEM; goto out; - btrfs_init_workers(&fs_info->scrub_wr_completion_workers, - "scrubwrc", - fs_info->thread_pool_size, - &fs_info->generic_worker); - fs_info->scrub_wr_completion_workers.idle_thresh = 2; - ret = btrfs_start_workers( - &fs_info->scrub_wr_completion_workers); - if (ret) + } + fs_info->scrub_wr_completion_workers + btrfs_alloc_workqueue("btrfs-scrubwrc", flags, + max_active, 2); + if (!fs_info->scrub_wr_completion_workers) { + ret = -ENOMEM; goto out; - btrfs_init_workers(&fs_info->scrub_nocow_workers, "scrubnc", 1, - &fs_info->generic_worker); - ret = btrfs_start_workers(&fs_info->scrub_nocow_workers); - if (ret) + } + fs_info->scrub_nocow_workers + btrfs_alloc_workqueue("btrfs-scrubnc", flags, 1, 0); + if (!fs_info->scrub_nocow_workers) { + ret = -ENOMEM; goto out; + } } ++fs_info->scrub_workers_refcnt; out: @@ -2821,9 +2828,9 @@ out: static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info) { if (--fs_info->scrub_workers_refcnt == 0) { - 
btrfs_stop_workers(&fs_info->scrub_workers); - btrfs_stop_workers(&fs_info->scrub_wr_completion_workers); - btrfs_stop_workers(&fs_info->scrub_nocow_workers); + btrfs_destroy_workqueue(fs_info->scrub_workers); + btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers); + btrfs_destroy_workqueue(fs_info->scrub_nocow_workers); } WARN_ON(fs_info->scrub_workers_refcnt < 0); } @@ -3125,10 +3132,10 @@ static int copy_nocow_pages(struct scrub_ctx *sctx, u64 logical, u64 len, nocow_ctx->len = len; nocow_ctx->mirror_num = mirror_num; nocow_ctx->physical_for_dev_replace = physical_for_dev_replace; - nocow_ctx->work.func = copy_nocow_pages_worker; + btrfs_init_work(&nocow_ctx->work, copy_nocow_pages_worker, NULL, NULL); INIT_LIST_HEAD(&nocow_ctx->inodes); - btrfs_queue_worker(&fs_info->scrub_nocow_workers, - &nocow_ctx->work); + btrfs_queue_work(fs_info->scrub_nocow_workers, + &nocow_ctx->work); return 0; } @@ -3150,7 +3157,7 @@ static int record_inode_for_nocow(u64 inum, u64 offset, u64 root, void *ctx) #define COPY_COMPLETE 1 -static void copy_nocow_pages_worker(struct btrfs_work *work) +static void copy_nocow_pages_worker(struct btrfs_work_struct *work) { struct scrub_copy_nocow_ctx *nocow_ctx container_of(work, struct scrub_copy_nocow_ctx, work); diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 83d3477..2859cc3 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1257,8 +1257,8 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, btrfs_workqueue_set_max(fs_info->endio_freespace_worker, new_pool_size); btrfs_workqueue_set_max(fs_info->delayed_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->readahead_workers, new_pool_size); - btrfs_set_max_workers(&fs_info->scrub_wr_completion_workers, - new_pool_size); + btrfs_workqueue_set_max(fs_info->scrub_wr_completion_workers, + new_pool_size); } static inline void btrfs_remount_prepare(struct btrfs_fs_info *fs_info) -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
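scrub allocates its queues lazily and refcounts them across concurrent scrubs, which is why this patch open-codes the -ENOMEM handling instead of relying on the open_ctree NULL check. A simplified sketch of scrub_workers_get() above, covering only the first queue; get_scrub_queues is a hypothetical name, and the remaining two queues follow the same shape in the real function:

	static int get_scrub_queues(struct btrfs_fs_info *fs_info,
				    int is_dev_replace)
	{
		int flags = WQ_FREEZABLE | WQ_UNBOUND;
		int max_active = is_dev_replace ? 1 : fs_info->thread_pool_size;

		if (fs_info->scrub_workers_refcnt == 0) {
			fs_info->scrub_workers =
				btrfs_alloc_workqueue("btrfs-scrub", flags,
						      max_active, 4);
			if (!fs_info->scrub_workers)
				return -ENOMEM;
			/* scrub_wr_completion_workers and scrub_nocow_workers
			 * are allocated the same way in the real function */
		}
		fs_info->scrub_workers_refcnt++;
		return 0;
	}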
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 17/18] btrfs: Cleanup the old btrfs_worker.
Since all the btrfs_worker is replaced with the newly created btrfs_workqueue, the old codes can be easily remove. Signed-off-by: Quwenruo <quwenruo@cn.fujitsu.com> --- Changelog: v1->v2: None v2->v3: - Reuse the old async-thred.[ch] files. v3->v4: - Reuse the old WORK_* bits. --- fs/btrfs/async-thread.c | 706 +----------------------------------------------- fs/btrfs/async-thread.h | 100 ------- fs/btrfs/ctree.h | 1 - fs/btrfs/disk-io.c | 12 - fs/btrfs/super.c | 8 - 5 files changed, 3 insertions(+), 824 deletions(-) diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c index a986be7..16a5eec 100644 --- a/fs/btrfs/async-thread.c +++ b/fs/btrfs/async-thread.c @@ -25,713 +25,13 @@ #include <linux/workqueue.h> #include "async-thread.h" -#define WORK_QUEUED_BIT 0 -#define WORK_DONE_BIT 1 -#define WORK_ORDER_DONE_BIT 2 -#define WORK_HIGH_PRIO_BIT 3 +#define WORK_DONE_BIT 0 +#define WORK_ORDER_DONE_BIT 1 +#define WORK_HIGH_PRIO_BIT 2 #define NO_THRESHOLD (-1) #define DFT_THRESHOLD (32) -/* - * container for the kthread task pointer and the list of pending work - * One of these is allocated per thread. - */ -struct btrfs_worker_thread { - /* pool we belong to */ - struct btrfs_workers *workers; - - /* list of struct btrfs_work that are waiting for service */ - struct list_head pending; - struct list_head prio_pending; - - /* list of worker threads from struct btrfs_workers */ - struct list_head worker_list; - - /* kthread */ - struct task_struct *task; - - /* number of things on the pending list */ - atomic_t num_pending; - - /* reference counter for this struct */ - atomic_t refs; - - unsigned long sequence; - - /* protects the pending list. */ - spinlock_t lock; - - /* set to non-zero when this thread is already awake and kicking */ - int working; - - /* are we currently idle */ - int idle; -}; - -static int __btrfs_start_workers(struct btrfs_workers *workers); - -/* - * btrfs_start_workers uses kthread_run, which can block waiting for memory - * for a very long time. It will actually throttle on page writeback, - * and so it may not make progress until after our btrfs worker threads - * process all of the pending work structs in their queue - * - * This means we can''t use btrfs_start_workers from inside a btrfs worker - * thread that is used as part of cleaning dirty memory, which pretty much - * involves all of the worker threads. - * - * Instead we have a helper queue who never has more than one thread - * where we scheduler thread start operations. This worker_start struct - * is used to contain the work and hold a pointer to the queue that needs - * another worker. - */ -struct worker_start { - struct btrfs_work work; - struct btrfs_workers *queue; -}; - -static void start_new_worker_func(struct btrfs_work *work) -{ - struct worker_start *start; - start = container_of(work, struct worker_start, work); - __btrfs_start_workers(start->queue); - kfree(start); -} - -/* - * helper function to move a thread onto the idle list after it - * has finished some requests. 
- */ -static void check_idle_worker(struct btrfs_worker_thread *worker) -{ - if (!worker->idle && atomic_read(&worker->num_pending) < - worker->workers->idle_thresh / 2) { - unsigned long flags; - spin_lock_irqsave(&worker->workers->lock, flags); - worker->idle = 1; - - /* the list may be empty if the worker is just starting */ - if (!list_empty(&worker->worker_list) && - !worker->workers->stopping) { - list_move(&worker->worker_list, - &worker->workers->idle_list); - } - spin_unlock_irqrestore(&worker->workers->lock, flags); - } -} - -/* - * helper function to move a thread off the idle list after new - * pending work is added. - */ -static void check_busy_worker(struct btrfs_worker_thread *worker) -{ - if (worker->idle && atomic_read(&worker->num_pending) >- worker->workers->idle_thresh) { - unsigned long flags; - spin_lock_irqsave(&worker->workers->lock, flags); - worker->idle = 0; - - if (!list_empty(&worker->worker_list) && - !worker->workers->stopping) { - list_move_tail(&worker->worker_list, - &worker->workers->worker_list); - } - spin_unlock_irqrestore(&worker->workers->lock, flags); - } -} - -static void check_pending_worker_creates(struct btrfs_worker_thread *worker) -{ - struct btrfs_workers *workers = worker->workers; - struct worker_start *start; - unsigned long flags; - - rmb(); - if (!workers->atomic_start_pending) - return; - - start = kzalloc(sizeof(*start), GFP_NOFS); - if (!start) - return; - - start->work.func = start_new_worker_func; - start->queue = workers; - - spin_lock_irqsave(&workers->lock, flags); - if (!workers->atomic_start_pending) - goto out; - - workers->atomic_start_pending = 0; - if (workers->num_workers + workers->num_workers_starting >- workers->max_workers) - goto out; - - workers->num_workers_starting += 1; - spin_unlock_irqrestore(&workers->lock, flags); - btrfs_queue_worker(workers->atomic_worker_start, &start->work); - return; - -out: - kfree(start); - spin_unlock_irqrestore(&workers->lock, flags); -} - -static noinline void run_ordered_completions(struct btrfs_workers *workers, - struct btrfs_work *work) -{ - if (!workers->ordered) - return; - - set_bit(WORK_DONE_BIT, &work->flags); - - spin_lock(&workers->order_lock); - - while (1) { - if (!list_empty(&workers->prio_order_list)) { - work = list_entry(workers->prio_order_list.next, - struct btrfs_work, order_list); - } else if (!list_empty(&workers->order_list)) { - work = list_entry(workers->order_list.next, - struct btrfs_work, order_list); - } else { - break; - } - if (!test_bit(WORK_DONE_BIT, &work->flags)) - break; - - /* we are going to call the ordered done function, but - * we leave the work item on the list as a barrier so - * that later work items that are done don''t have their - * functions called before this one returns - */ - if (test_and_set_bit(WORK_ORDER_DONE_BIT, &work->flags)) - break; - - spin_unlock(&workers->order_lock); - - work->ordered_func(work); - - /* now take the lock again and drop our item from the list */ - spin_lock(&workers->order_lock); - list_del(&work->order_list); - spin_unlock(&workers->order_lock); - - /* - * we don''t want to call the ordered free functions - * with the lock held though - */ - work->ordered_free(work); - spin_lock(&workers->order_lock); - } - - spin_unlock(&workers->order_lock); -} - -static void put_worker(struct btrfs_worker_thread *worker) -{ - if (atomic_dec_and_test(&worker->refs)) - kfree(worker); -} - -static int try_worker_shutdown(struct btrfs_worker_thread *worker) -{ - int freeit = 0; - - spin_lock_irq(&worker->lock); - 
spin_lock(&worker->workers->lock); - if (worker->workers->num_workers > 1 && - worker->idle && - !worker->working && - !list_empty(&worker->worker_list) && - list_empty(&worker->prio_pending) && - list_empty(&worker->pending) && - atomic_read(&worker->num_pending) == 0) { - freeit = 1; - list_del_init(&worker->worker_list); - worker->workers->num_workers--; - } - spin_unlock(&worker->workers->lock); - spin_unlock_irq(&worker->lock); - - if (freeit) - put_worker(worker); - return freeit; -} - -static struct btrfs_work *get_next_work(struct btrfs_worker_thread *worker, - struct list_head *prio_head, - struct list_head *head) -{ - struct btrfs_work *work = NULL; - struct list_head *cur = NULL; - - if (!list_empty(prio_head)) - cur = prio_head->next; - - smp_mb(); - if (!list_empty(&worker->prio_pending)) - goto refill; - - if (!list_empty(head)) - cur = head->next; - - if (cur) - goto out; - -refill: - spin_lock_irq(&worker->lock); - list_splice_tail_init(&worker->prio_pending, prio_head); - list_splice_tail_init(&worker->pending, head); - - if (!list_empty(prio_head)) - cur = prio_head->next; - else if (!list_empty(head)) - cur = head->next; - spin_unlock_irq(&worker->lock); - - if (!cur) - goto out_fail; - -out: - work = list_entry(cur, struct btrfs_work, list); - -out_fail: - return work; -} - -/* - * main loop for servicing work items - */ -static int worker_loop(void *arg) -{ - struct btrfs_worker_thread *worker = arg; - struct list_head head; - struct list_head prio_head; - struct btrfs_work *work; - - INIT_LIST_HEAD(&head); - INIT_LIST_HEAD(&prio_head); - - do { -again: - while (1) { - - - work = get_next_work(worker, &prio_head, &head); - if (!work) - break; - - list_del(&work->list); - clear_bit(WORK_QUEUED_BIT, &work->flags); - - work->worker = worker; - - work->func(work); - - atomic_dec(&worker->num_pending); - /* - * unless this is an ordered work queue, - * ''work'' was probably freed by func above. - */ - run_ordered_completions(worker->workers, work); - - check_pending_worker_creates(worker); - cond_resched(); - } - - spin_lock_irq(&worker->lock); - check_idle_worker(worker); - - if (freezing(current)) { - worker->working = 0; - spin_unlock_irq(&worker->lock); - try_to_freeze(); - } else { - spin_unlock_irq(&worker->lock); - if (!kthread_should_stop()) { - cpu_relax(); - /* - * we''ve dropped the lock, did someone else - * jump_in? 
- */ - smp_mb(); - if (!list_empty(&worker->pending) || - !list_empty(&worker->prio_pending)) - continue; - - /* - * this short schedule allows more work to - * come in without the queue functions - * needing to go through wake_up_process() - * - * worker->working is still 1, so nobody - * is going to try and wake us up - */ - schedule_timeout(1); - smp_mb(); - if (!list_empty(&worker->pending) || - !list_empty(&worker->prio_pending)) - continue; - - if (kthread_should_stop()) - break; - - /* still no more work?, sleep for real */ - spin_lock_irq(&worker->lock); - set_current_state(TASK_INTERRUPTIBLE); - if (!list_empty(&worker->pending) || - !list_empty(&worker->prio_pending)) { - spin_unlock_irq(&worker->lock); - set_current_state(TASK_RUNNING); - goto again; - } - - /* - * this makes sure we get a wakeup when someone - * adds something new to the queue - */ - worker->working = 0; - spin_unlock_irq(&worker->lock); - - if (!kthread_should_stop()) { - schedule_timeout(HZ * 120); - if (!worker->working && - try_worker_shutdown(worker)) { - return 0; - } - } - } - __set_current_state(TASK_RUNNING); - } - } while (!kthread_should_stop()); - return 0; -} - -/* - * this will wait for all the worker threads to shutdown - */ -void btrfs_stop_workers(struct btrfs_workers *workers) -{ - struct list_head *cur; - struct btrfs_worker_thread *worker; - int can_stop; - - spin_lock_irq(&workers->lock); - workers->stopping = 1; - list_splice_init(&workers->idle_list, &workers->worker_list); - while (!list_empty(&workers->worker_list)) { - cur = workers->worker_list.next; - worker = list_entry(cur, struct btrfs_worker_thread, - worker_list); - - atomic_inc(&worker->refs); - workers->num_workers -= 1; - if (!list_empty(&worker->worker_list)) { - list_del_init(&worker->worker_list); - put_worker(worker); - can_stop = 1; - } else - can_stop = 0; - spin_unlock_irq(&workers->lock); - if (can_stop) - kthread_stop(worker->task); - spin_lock_irq(&workers->lock); - put_worker(worker); - } - spin_unlock_irq(&workers->lock); -} - -/* - * simple init on struct btrfs_workers - */ -void btrfs_init_workers(struct btrfs_workers *workers, char *name, int max, - struct btrfs_workers *async_helper) -{ - workers->num_workers = 0; - workers->num_workers_starting = 0; - INIT_LIST_HEAD(&workers->worker_list); - INIT_LIST_HEAD(&workers->idle_list); - INIT_LIST_HEAD(&workers->order_list); - INIT_LIST_HEAD(&workers->prio_order_list); - spin_lock_init(&workers->lock); - spin_lock_init(&workers->order_lock); - workers->max_workers = max; - workers->idle_thresh = 32; - workers->name = name; - workers->ordered = 0; - workers->atomic_start_pending = 0; - workers->atomic_worker_start = async_helper; - workers->stopping = 0; -} - -/* - * starts new worker threads. This does not enforce the max worker - * count in case you need to temporarily go past it. 
- */ -static int __btrfs_start_workers(struct btrfs_workers *workers) -{ - struct btrfs_worker_thread *worker; - int ret = 0; - - worker = kzalloc(sizeof(*worker), GFP_NOFS); - if (!worker) { - ret = -ENOMEM; - goto fail; - } - - INIT_LIST_HEAD(&worker->pending); - INIT_LIST_HEAD(&worker->prio_pending); - INIT_LIST_HEAD(&worker->worker_list); - spin_lock_init(&worker->lock); - - atomic_set(&worker->num_pending, 0); - atomic_set(&worker->refs, 1); - worker->workers = workers; - worker->task = kthread_create(worker_loop, worker, - "btrfs-%s-%d", workers->name, - workers->num_workers + 1); - if (IS_ERR(worker->task)) { - ret = PTR_ERR(worker->task); - goto fail; - } - - spin_lock_irq(&workers->lock); - if (workers->stopping) { - spin_unlock_irq(&workers->lock); - ret = -EINVAL; - goto fail_kthread; - } - list_add_tail(&worker->worker_list, &workers->idle_list); - worker->idle = 1; - workers->num_workers++; - workers->num_workers_starting--; - WARN_ON(workers->num_workers_starting < 0); - spin_unlock_irq(&workers->lock); - - wake_up_process(worker->task); - return 0; - -fail_kthread: - kthread_stop(worker->task); -fail: - kfree(worker); - spin_lock_irq(&workers->lock); - workers->num_workers_starting--; - spin_unlock_irq(&workers->lock); - return ret; -} - -int btrfs_start_workers(struct btrfs_workers *workers) -{ - spin_lock_irq(&workers->lock); - workers->num_workers_starting++; - spin_unlock_irq(&workers->lock); - return __btrfs_start_workers(workers); -} - -/* - * run through the list and find a worker thread that doesn''t have a lot - * to do right now. This can return null if we aren''t yet at the thread - * count limit and all of the threads are busy. - */ -static struct btrfs_worker_thread *next_worker(struct btrfs_workers *workers) -{ - struct btrfs_worker_thread *worker; - struct list_head *next; - int enforce_min; - - enforce_min = (workers->num_workers + workers->num_workers_starting) < - workers->max_workers; - - /* - * if we find an idle thread, don''t move it to the end of the - * idle list. This improves the chance that the next submission - * will reuse the same thread, and maybe catch it while it is still - * working - */ - if (!list_empty(&workers->idle_list)) { - next = workers->idle_list.next; - worker = list_entry(next, struct btrfs_worker_thread, - worker_list); - return worker; - } - if (enforce_min || list_empty(&workers->worker_list)) - return NULL; - - /* - * if we pick a busy task, move the task to the end of the list. - * hopefully this will keep things somewhat evenly balanced. - * Do the move in batches based on the sequence number. This groups - * requests submitted at roughly the same time onto the same worker. - */ - next = workers->worker_list.next; - worker = list_entry(next, struct btrfs_worker_thread, worker_list); - worker->sequence++; - - if (worker->sequence % workers->idle_thresh == 0) - list_move_tail(next, &workers->worker_list); - return worker; -} - -/* - * selects a worker thread to take the next job. This will either find - * an idle worker, start a new worker up to the max count, or just return - * one of the existing busy workers. 
- */ -static struct btrfs_worker_thread *find_worker(struct btrfs_workers *workers) -{ - struct btrfs_worker_thread *worker; - unsigned long flags; - struct list_head *fallback; - int ret; - - spin_lock_irqsave(&workers->lock, flags); -again: - worker = next_worker(workers); - - if (!worker) { - if (workers->num_workers + workers->num_workers_starting >= - workers->max_workers) { - goto fallback; - } else if (workers->atomic_worker_start) { - workers->atomic_start_pending = 1; - goto fallback; - } else { - workers->num_workers_starting++; - spin_unlock_irqrestore(&workers->lock, flags); - /* we're below the limit, start another worker */ - ret = __btrfs_start_workers(workers); - spin_lock_irqsave(&workers->lock, flags); - if (ret) - goto fallback; - goto again; - } - } - goto found; - -fallback: - fallback = NULL; - /* - * we have failed to find any workers, just - * return the first one we can find. - */ - if (!list_empty(&workers->worker_list)) - fallback = workers->worker_list.next; - if (!list_empty(&workers->idle_list)) - fallback = workers->idle_list.next; - BUG_ON(!fallback); - worker = list_entry(fallback, - struct btrfs_worker_thread, worker_list); -found: - /* - * this makes sure the worker doesn't exit before it is placed - * onto a busy/idle list - */ - atomic_inc(&worker->num_pending); - spin_unlock_irqrestore(&workers->lock, flags); - return worker; -} - -/* - * btrfs_requeue_work just puts the work item back on the tail of the list - * it was taken from. It is intended for use with long running work functions - * that make some progress and want to give the cpu up for others. - */ -void btrfs_requeue_work(struct btrfs_work *work) -{ - struct btrfs_worker_thread *worker = work->worker; - unsigned long flags; - int wake = 0; - - if (test_and_set_bit(WORK_QUEUED_BIT, &work->flags)) - return; - - spin_lock_irqsave(&worker->lock, flags); - if (test_bit(WORK_HIGH_PRIO_BIT, &work->flags)) - list_add_tail(&work->list, &worker->prio_pending); - else - list_add_tail(&work->list, &worker->pending); - atomic_inc(&worker->num_pending); - - /* by definition we're busy, take ourselves off the idle - * list - */ - if (worker->idle) { - spin_lock(&worker->workers->lock); - worker->idle = 0; - list_move_tail(&worker->worker_list, - &worker->workers->worker_list); - spin_unlock(&worker->workers->lock); - } - if (!worker->working) { - wake = 1; - worker->working = 1; - } - - if (wake) - wake_up_process(worker->task); - spin_unlock_irqrestore(&worker->lock, flags); -} - -void btrfs_set_work_high_prio(struct btrfs_work *work) -{ - set_bit(WORK_HIGH_PRIO_BIT, &work->flags); -} - -/* - * places a struct btrfs_work into the pending queue of one of the kthreads - */ -void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work) -{ - struct btrfs_worker_thread *worker; - unsigned long flags; - int wake = 0; - - /* don't requeue something already on a list */ - if (test_and_set_bit(WORK_QUEUED_BIT, &work->flags)) - return; - - worker = find_worker(workers); - if (workers->ordered) { - /* - * you're not allowed to do ordered queues from an - * interrupt handler - */ - spin_lock(&workers->order_lock); - if (test_bit(WORK_HIGH_PRIO_BIT, &work->flags)) { - list_add_tail(&work->order_list, - &workers->prio_order_list); - } else { - list_add_tail(&work->order_list, &workers->order_list); - } - spin_unlock(&workers->order_lock); - } else { - INIT_LIST_HEAD(&work->order_list); - } - - spin_lock_irqsave(&worker->lock, flags); - - if (test_bit(WORK_HIGH_PRIO_BIT, &work->flags)) - 
list_add_tail(&work->list, &worker->prio_pending); - else - list_add_tail(&work->list, &worker->pending); - check_busy_worker(worker); - - /* - * avoid calling into wake_up_process if this thread has already - * been kicked - */ - if (!worker->working) - wake = 1; - worker->working = 1; - - if (wake) - wake_up_process(worker->task); - spin_unlock_irqrestore(&worker->lock, flags); -} - struct __btrfs_workqueue_struct { struct workqueue_struct *normal_wq; /* List head pointing to ordered work list */ diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h index eec4a0f..67bdde2 100644 --- a/fs/btrfs/async-thread.h +++ b/fs/btrfs/async-thread.h @@ -20,106 +20,6 @@ #ifndef __BTRFS_ASYNC_THREAD_ #define __BTRFS_ASYNC_THREAD_ -struct btrfs_worker_thread; - -/* - * This is similar to a workqueue, but it is meant to spread the operations - * across all available cpus instead of just the CPU that was used to - * queue the work. There is also some batching introduced to try and - * cut down on context switches. - * - * By default threads are added on demand up to 2 * the number of cpus. - * Changing struct btrfs_workers->max_workers is one way to prevent - * demand creation of kthreads. - * - * the basic model of these worker threads is to embed a btrfs_work - * structure in your own data struct, and use container_of in a - * work function to get back to your data struct. - */ -struct btrfs_work { - /* - * func should be set to the function you want called - * your work struct is passed as the only arg - * - * ordered_func must be set for work sent to an ordered work queue, - * and it is called to complete a given work item in the same - * order they were sent to the queue. - */ - void (*func)(struct btrfs_work *work); - void (*ordered_func)(struct btrfs_work *work); - void (*ordered_free)(struct btrfs_work *work); - - /* - * flags should be set to zero. It is used to make sure the - * struct is only inserted once into the list. - */ - unsigned long flags; - - /* don't touch these */ - struct btrfs_worker_thread *worker; - struct list_head list; - struct list_head order_list; -}; - -struct btrfs_workers { - /* current number of running workers */ - int num_workers; - - int num_workers_starting; - - /* max number of workers allowed. changed by btrfs_start_workers */ - int max_workers; - - /* once a worker has this many requests or fewer, it is idle */ - int idle_thresh; - - /* force completions in the order they were queued */ - int ordered; - - /* more workers required, but in an interrupt handler */ - int atomic_start_pending; - - /* - * are we allowed to sleep while starting workers or are we required - * to start them at a later time? If we can't sleep, this indicates - * which queue we need to use to schedule thread creation. - */ - struct btrfs_workers *atomic_worker_start; - - /* list with all the work threads. The workers on the idle thread - * may be actively servicing jobs, but they haven't yet hit the - * idle thresh limit above. 
- */ - struct list_head worker_list; - struct list_head idle_list; - - /* - * when operating in ordered mode, this maintains the list - * of work items waiting for completion - */ - struct list_head order_list; - struct list_head prio_order_list; - - /* lock for finding the next worker thread to queue on */ - spinlock_t lock; - - /* lock for the ordered lists */ - spinlock_t order_lock; - - /* extra name for this worker, used for current->name */ - char *name; - - int stopping; -}; - -void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work); -int btrfs_start_workers(struct btrfs_workers *workers); -void btrfs_stop_workers(struct btrfs_workers *workers); -void btrfs_init_workers(struct btrfs_workers *workers, char *name, int max, - struct btrfs_workers *async_starter); -void btrfs_requeue_work(struct btrfs_work *work); -void btrfs_set_work_high_prio(struct btrfs_work *work); - struct btrfs_workqueue_struct; /* Internal use only */ struct __btrfs_workqueue_struct; diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 5d71258..f5f3de0 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1488,7 +1488,6 @@ struct btrfs_fs_info { * A third pool does submit_bio to avoid deadlocking with the other * two */ - struct btrfs_workers generic_worker; struct btrfs_workqueue_struct *workers; struct btrfs_workqueue_struct *delalloc_workers; struct btrfs_workqueue_struct *flush_workers; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index fb94e94..539378d 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2005,7 +2005,6 @@ static noinline int next_root_backup(struct btrfs_fs_info *info, /* helper to cleanup workers */ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info) { - btrfs_stop_workers(&fs_info->generic_worker); btrfs_destroy_workqueue(fs_info->fixup_workers); btrfs_destroy_workqueue(fs_info->delalloc_workers); btrfs_destroy_workqueue(fs_info->workers); @@ -2468,8 +2467,6 @@ int open_ctree(struct super_block *sb, } max_active = fs_info->thread_pool_size; - btrfs_init_workers(&fs_info->generic_worker, - "genwork", 1, NULL); fs_info->workers = btrfs_alloc_workqueue("worker", flags | WQ_HIGHPRI, @@ -2522,15 +2519,6 @@ int open_ctree(struct super_block *sb, fs_info->qgroup_rescan_workers = btrfs_alloc_workqueue("qgroup-rescan", flags, 1, 0); - /* - * btrfs_start_workers can really only fail because of ENOMEM so just - * return -ENOMEM if any of these fail. 
- */ - ret = btrfs_start_workers(&fs_info->generic_worker); - if (ret) { - err = -ENOMEM; - goto fail_sb_buffer; - } if (!(fs_info->workers && fs_info->delalloc_workers && fs_info->submit_workers && fs_info->flush_workers && fs_info->endio_workers && fs_info->endio_meta_workers && diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 2859cc3..aa3efe2 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1226,13 +1226,6 @@ error_fs_info: return ERR_PTR(error); } -static void btrfs_set_max_workers(struct btrfs_workers *workers, int new_limit) -{ - spin_lock_irq(&workers->lock); - workers->max_workers = new_limit; - spin_unlock_irq(&workers->lock); -} - static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, int new_pool_size, int old_pool_size) { @@ -1244,7 +1237,6 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info, printk(KERN_INFO "btrfs: resize thread pool %d -> %d\n", old_pool_size, new_pool_size); - btrfs_set_max_workers(&fs_info->generic_worker, new_pool_size); btrfs_workqueue_set_max(fs_info->workers, new_pool_size); btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size); btrfs_workqueue_set_max(fs_info->submit_workers, new_pool_size); -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Qu Wenruo
2013-Dec-17 09:31 UTC
[PATCH v4 18/18] btrfs: Cleanup the "_struct" suffix in btrfs_workqueue
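For orientation before the diff: a minimal sketch of what a caller of the renamed API looks like once this patch is applied. The demo_* names are hypothetical; the btrfs_* declarations and the embed-plus-container_of() pattern are the ones visible in the diff below. Error handling and the surrounding fs_info wiring are left out.

#include <linux/slab.h>
#include "async-thread.h"

/* hypothetical private work item embedding the renamed struct btrfs_work */
struct demo_work {
	int payload;
	struct btrfs_work work;
};

static void demo_fn(struct btrfs_work *work)
{
	/* recover the enclosing struct, as the converted call sites do */
	struct demo_work *dw = container_of(work, struct demo_work, work);

	/* ... do the actual async processing of dw->payload ... */

	/*
	 * freeing here is fine for unordered work: normal_work_helper
	 * only touches the work item afterwards when ordered_func is set
	 */
	kfree(dw);
}

static int demo_submit(struct btrfs_workqueue *wq, int payload)
{
	struct demo_work *dw = kzalloc(sizeof(*dw), GFP_NOFS);

	if (!dw)
		return -ENOMEM;
	dw->payload = payload;
	/* plain (unordered) submission: no ordered_func/ordered_free */
	btrfs_init_work(&dw->work, demo_fn, NULL, NULL);
	btrfs_queue_work(wq, &dw->work);
	return 0;
}

The queue itself would come from btrfs_alloc_workqueue(), e.g. btrfs_alloc_workqueue("demo", 0, max_active, 0) by analogy with the qgroup-rescan queue; btrfs_set_work_high_priority() marks a work item so that btrfs_queue_work() routes it to the high-priority queue when one was allocated.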
Since the "_struct" suffix is mainly used for distinguish the differnt btrfs_work between the original and the newly created one, there is no need using the suffix since all btrfs_workers are changed into btrfs_workqueue. Also this patch fixed some codes whose code style is changed due to the too long "_struct" suffix. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> --- Changelog: v3->v4: - Remove the "_struct" suffix. --- fs/btrfs/async-thread.c | 64 ++++++++++++++++++++++++------------------------ fs/btrfs/async-thread.h | 34 ++++++++++++------------- fs/btrfs/ctree.h | 44 ++++++++++++++++----------------- fs/btrfs/delayed-inode.c | 4 +-- fs/btrfs/disk-io.c | 14 +++++------ fs/btrfs/extent-tree.c | 2 +- fs/btrfs/inode.c | 18 +++++++------- fs/btrfs/ordered-data.c | 2 +- fs/btrfs/ordered-data.h | 4 +-- fs/btrfs/qgroup.c | 2 +- fs/btrfs/raid56.c | 14 +++++------ fs/btrfs/reada.c | 5 ++-- fs/btrfs/scrub.c | 23 ++++++++--------- fs/btrfs/volumes.c | 2 +- fs/btrfs/volumes.h | 2 +- 15 files changed, 115 insertions(+), 119 deletions(-) diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c index 16a5eec..f896426 100644 --- a/fs/btrfs/async-thread.c +++ b/fs/btrfs/async-thread.c @@ -32,7 +32,7 @@ #define NO_THRESHOLD (-1) #define DFT_THRESHOLD (32) -struct __btrfs_workqueue_struct { +struct __btrfs_workqueue { struct workqueue_struct *normal_wq; /* List head pointing to ordered work list */ struct list_head ordered_list; @@ -49,15 +49,15 @@ struct __btrfs_workqueue_struct { spinlock_t thres_lock; }; -struct btrfs_workqueue_struct { - struct __btrfs_workqueue_struct *normal; - struct __btrfs_workqueue_struct *high; +struct btrfs_workqueue { + struct __btrfs_workqueue *normal; + struct __btrfs_workqueue *high; }; -static inline struct __btrfs_workqueue_struct +static inline struct __btrfs_workqueue *__btrfs_alloc_workqueue(char *name, int flags, int max_active, int thresh) { - struct __btrfs_workqueue_struct *ret = kzalloc(sizeof(*ret), GFP_NOFS); + struct __btrfs_workqueue *ret = kzalloc(sizeof(*ret), GFP_NOFS); if (unlikely(!ret)) return NULL; @@ -95,14 +95,14 @@ static inline struct __btrfs_workqueue_struct } static inline void -__btrfs_destroy_workqueue(struct __btrfs_workqueue_struct *wq); +__btrfs_destroy_workqueue(struct __btrfs_workqueue *wq); -struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name, - int flags, - int max_active, - int thresh) +struct btrfs_workqueue *btrfs_alloc_workqueue(char *name, + int flags, + int max_active, + int thresh) { - struct btrfs_workqueue_struct *ret = kzalloc(sizeof(*ret), GFP_NOFS); + struct btrfs_workqueue *ret = kzalloc(sizeof(*ret), GFP_NOFS); if (unlikely(!ret)) return NULL; @@ -131,7 +131,7 @@ struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name, * This hook WILL be called in IRQ handler context, * so workqueue_set_max_active MUST NOT be called in this hook */ -static inline void thresh_queue_hook(struct __btrfs_workqueue_struct *wq) +static inline void thresh_queue_hook(struct __btrfs_workqueue *wq) { if (wq->thresh == NO_THRESHOLD) return; @@ -143,7 +143,7 @@ static inline void thresh_queue_hook(struct __btrfs_workqueue_struct *wq) * This hook is called in kthread content. * So workqueue_set_max_active is called here. 
*/ -static inline void thresh_exec_hook(struct __btrfs_workqueue_struct *wq) +static inline void thresh_exec_hook(struct __btrfs_workqueue *wq) { int new_max_active; long pending; @@ -186,10 +186,10 @@ out: } } -static void run_ordered_work(struct __btrfs_workqueue_struct *wq) +static void run_ordered_work(struct __btrfs_workqueue *wq) { struct list_head *list = &wq->ordered_list; - struct btrfs_work_struct *work; + struct btrfs_work *work; spinlock_t *lock = &wq->list_lock; unsigned long flags; @@ -197,7 +197,7 @@ static void run_ordered_work(struct __btrfs_workqueue_struct *wq) spin_lock_irqsave(lock, flags); if (list_empty(list)) break; - work = list_entry(list->next, struct btrfs_work_struct, + work = list_entry(list->next, struct btrfs_work, ordered_list); if (!test_bit(WORK_DONE_BIT, &work->flags)) break; @@ -229,10 +229,10 @@ static void run_ordered_work(struct __btrfs_workqueue_struct *wq) static void normal_work_helper(struct work_struct *arg) { - struct btrfs_work_struct *work; + struct btrfs_work *work; int need_order = 0; - work = container_of(arg, struct btrfs_work_struct, normal_work); + work = container_of(arg, struct btrfs_work, normal_work); /* * Some work may free the whole struct, we should check before * executing the func. @@ -247,10 +247,10 @@ static void normal_work_helper(struct work_struct *arg) } } -void btrfs_init_work(struct btrfs_work_struct *work, - void (*func)(struct btrfs_work_struct *), - void (*ordered_func)(struct btrfs_work_struct *), - void (*ordered_free)(struct btrfs_work_struct *)) +void btrfs_init_work(struct btrfs_work *work, + void (*func)(struct btrfs_work *), + void (*ordered_func)(struct btrfs_work *), + void (*ordered_free)(struct btrfs_work *)) { work->func = func; work->ordered_func = ordered_func; @@ -260,8 +260,8 @@ void btrfs_init_work(struct btrfs_work_struct *work, work->flags = 0; } -static inline void __btrfs_queue_work(struct __btrfs_workqueue_struct *wq, - struct btrfs_work_struct *work) +static inline void __btrfs_queue_work(struct __btrfs_workqueue *wq, + struct btrfs_work *work) { unsigned long flags; @@ -275,10 +275,10 @@ static inline void __btrfs_queue_work(struct __btrfs_workqueue_struct *wq, queue_work(wq->normal_wq, &work->normal_work); } -void btrfs_queue_work(struct btrfs_workqueue_struct *wq, - struct btrfs_work_struct *work) +void btrfs_queue_work(struct btrfs_workqueue *wq, + struct btrfs_work *work) { - struct __btrfs_workqueue_struct *dest_wq; + struct __btrfs_workqueue *dest_wq; if (test_bit(WORK_HIGH_PRIO_BIT, &work->flags) && wq->high) dest_wq = wq->high; @@ -288,13 +288,13 @@ void btrfs_queue_work(struct btrfs_workqueue_struct *wq, } static inline void -__btrfs_destroy_workqueue(struct __btrfs_workqueue_struct *wq) +__btrfs_destroy_workqueue(struct __btrfs_workqueue *wq) { destroy_workqueue(wq->normal_wq); kfree(wq); } -void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq) +void btrfs_destroy_workqueue(struct btrfs_workqueue *wq) { if (!wq) return; @@ -303,14 +303,14 @@ void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq) __btrfs_destroy_workqueue(wq->normal); } -void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max) +void btrfs_workqueue_set_max(struct btrfs_workqueue *wq, int max) { wq->normal->max_active = max; if (wq->high) wq->high->max_active = max; } -void btrfs_set_work_high_priority(struct btrfs_work_struct *work) +void btrfs_set_work_high_priority(struct btrfs_work *work) { set_bit(WORK_HIGH_PRIO_BIT, &work->flags); } diff --git a/fs/btrfs/async-thread.h 
b/fs/btrfs/async-thread.h index 67bdde2..a5226f8 100644 --- a/fs/btrfs/async-thread.h +++ b/fs/btrfs/async-thread.h @@ -20,33 +20,33 @@ #ifndef __BTRFS_ASYNC_THREAD_ #define __BTRFS_ASYNC_THREAD_ -struct btrfs_workqueue_struct; +struct btrfs_workqueue; /* Internal use only */ -struct __btrfs_workqueue_struct; +struct __btrfs_workqueue; -struct btrfs_work_struct { - void (*func)(struct btrfs_work_struct *arg); - void (*ordered_func)(struct btrfs_work_struct *arg); - void (*ordered_free)(struct btrfs_work_struct *arg); +struct btrfs_work { + void (*func)(struct btrfs_work *arg); + void (*ordered_func)(struct btrfs_work *arg); + void (*ordered_free)(struct btrfs_work *arg); /* Don't touch things below */ struct work_struct normal_work; struct list_head ordered_list; - struct __btrfs_workqueue_struct *wq; + struct __btrfs_workqueue *wq; unsigned long flags; }; -struct btrfs_workqueue_struct *btrfs_alloc_workqueue(char *name, +struct btrfs_workqueue *btrfs_alloc_workqueue(char *name, int flags, int max_active, int thresh); -void btrfs_init_work(struct btrfs_work_struct *work, - void (*func)(struct btrfs_work_struct *), - void (*ordered_func)(struct btrfs_work_struct *), - void (*ordered_free)(struct btrfs_work_struct *)); -void btrfs_queue_work(struct btrfs_workqueue_struct *wq, - struct btrfs_work_struct *work); -void btrfs_destroy_workqueue(struct btrfs_workqueue_struct *wq); -void btrfs_workqueue_set_max(struct btrfs_workqueue_struct *wq, int max); -void btrfs_set_work_high_priority(struct btrfs_work_struct *work); +void btrfs_init_work(struct btrfs_work *work, + void (*func)(struct btrfs_work *), + void (*ordered_func)(struct btrfs_work *), + void (*ordered_free)(struct btrfs_work *)); +void btrfs_queue_work(struct btrfs_workqueue *wq, + struct btrfs_work *work); +void btrfs_destroy_workqueue(struct btrfs_workqueue *wq); +void btrfs_workqueue_set_max(struct btrfs_workqueue *wq, int max); +void btrfs_set_work_high_priority(struct btrfs_work *work); #endif diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index f5f3de0..1d7f759 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1205,7 +1205,7 @@ struct btrfs_caching_control { struct list_head list; struct mutex mutex; wait_queue_head_t wait; - struct btrfs_work_struct work; + struct btrfs_work work; struct btrfs_block_group_cache *block_group; u64 progress; atomic_t count; @@ -1488,27 +1488,27 @@ struct btrfs_fs_info { * A third pool does submit_bio to avoid deadlocking with the other * two */ - struct btrfs_workqueue_struct *workers; - struct btrfs_workqueue_struct *delalloc_workers; - struct btrfs_workqueue_struct *flush_workers; - struct btrfs_workqueue_struct *endio_workers; - struct btrfs_workqueue_struct *endio_meta_workers; - struct btrfs_workqueue_struct *endio_raid56_workers; - struct btrfs_workqueue_struct *rmw_workers; - struct btrfs_workqueue_struct *endio_meta_write_workers; - struct btrfs_workqueue_struct *endio_write_workers; - struct btrfs_workqueue_struct *endio_freespace_worker; - struct btrfs_workqueue_struct *submit_workers; - struct btrfs_workqueue_struct *caching_workers; - struct btrfs_workqueue_struct *readahead_workers; + struct btrfs_workqueue *workers; + struct btrfs_workqueue *delalloc_workers; + struct btrfs_workqueue *flush_workers; + struct btrfs_workqueue *endio_workers; + struct btrfs_workqueue *endio_meta_workers; + struct btrfs_workqueue *endio_raid56_workers; + struct btrfs_workqueue *rmw_workers; + struct btrfs_workqueue *endio_meta_write_workers; + struct btrfs_workqueue *endio_write_workers; + 
struct btrfs_workqueue *endio_freespace_worker; + struct btrfs_workqueue *submit_workers; + struct btrfs_workqueue *caching_workers; + struct btrfs_workqueue *readahead_workers; /* * fixup workers take dirty pages that didn't properly go through * the cow mechanism and make them safe to write. It happens * for the sys_munmap function call path */ - struct btrfs_workqueue_struct *fixup_workers; - struct btrfs_workqueue_struct *delayed_workers; + struct btrfs_workqueue *fixup_workers; + struct btrfs_workqueue *delayed_workers; struct task_struct *transaction_kthread; struct task_struct *cleaner_kthread; int thread_pool_size; @@ -1586,9 +1586,9 @@ struct btrfs_fs_info { atomic_t scrub_cancel_req; wait_queue_head_t scrub_pause_wait; int scrub_workers_refcnt; - struct btrfs_workqueue_struct *scrub_workers; - struct btrfs_workqueue_struct *scrub_wr_completion_workers; - struct btrfs_workqueue_struct *scrub_nocow_workers; + struct btrfs_workqueue *scrub_workers; + struct btrfs_workqueue *scrub_wr_completion_workers; + struct btrfs_workqueue *scrub_nocow_workers; #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY u32 check_integrity_print_mask; @@ -1629,9 +1629,9 @@ struct btrfs_fs_info { /* qgroup rescan items */ struct mutex qgroup_rescan_lock; /* protects the progress item */ struct btrfs_key qgroup_rescan_progress; - struct btrfs_workqueue_struct *qgroup_rescan_workers; + struct btrfs_workqueue *qgroup_rescan_workers; struct completion qgroup_rescan_completion; - struct btrfs_work_struct qgroup_rescan_work; + struct btrfs_work qgroup_rescan_work; /* filesystem state */ unsigned long fs_state; @@ -3621,7 +3621,7 @@ struct btrfs_delalloc_work { int delay_iput; struct completion completion; struct list_head list; - struct btrfs_work_struct work; + struct btrfs_work work; }; struct btrfs_delalloc_work *btrfs_alloc_delalloc_work(struct inode *inode, diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c index e4ad5ea..433ea9d 100644 --- a/fs/btrfs/delayed-inode.c +++ b/fs/btrfs/delayed-inode.c @@ -1260,10 +1260,10 @@ void btrfs_remove_delayed_node(struct inode *inode) struct btrfs_async_delayed_work { struct btrfs_delayed_root *delayed_root; int nr; - struct btrfs_work_struct work; + struct btrfs_work work; }; -static void btrfs_async_run_delayed_root(struct btrfs_work_struct *work) +static void btrfs_async_run_delayed_root(struct btrfs_work *work) { struct btrfs_async_delayed_work *async_work; struct btrfs_delayed_root *delayed_root; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 539378d..9a299e0 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -54,7 +54,7 @@ #endif static struct extent_io_ops btree_extent_io_ops; -static void end_workqueue_fn(struct btrfs_work_struct *work); +static void end_workqueue_fn(struct btrfs_work *work); static void free_fs_root(struct btrfs_root *root); static int btrfs_check_super_valid(struct btrfs_fs_info *fs_info, int read_only); @@ -85,7 +85,7 @@ struct end_io_wq { int error; int metadata; struct list_head list; - struct btrfs_work_struct work; + struct btrfs_work work; }; /* @@ -107,7 +107,7 @@ struct async_submit_bio { * can't tell us where in the file the bio should go */ u64 bio_offset; - struct btrfs_work_struct work; + struct btrfs_work work; int error; }; @@ -745,7 +745,7 @@ unsigned long btrfs_async_submit_limit(struct btrfs_fs_info *info) return 256 * limit; } -static void run_one_async_start(struct btrfs_work_struct *work) +static void run_one_async_start(struct btrfs_work *work) { struct async_submit_bio *async; int ret; @@ -758,7 
+758,7 @@ static void run_one_async_start(struct btrfs_work_struct *work) async->error = ret; } -static void run_one_async_done(struct btrfs_work_struct *work) +static void run_one_async_done(struct btrfs_work *work) { struct btrfs_fs_info *fs_info; struct async_submit_bio *async; @@ -785,7 +785,7 @@ static void run_one_async_done(struct btrfs_work_struct *work) async->bio_offset); } -static void run_one_async_free(struct btrfs_work_struct *work) +static void run_one_async_free(struct btrfs_work *work) { struct async_submit_bio *async; @@ -1677,7 +1677,7 @@ static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi) * called by the kthread helper functions to finally call the bio end_io * functions. This is where read checksum verification actually happens */ -static void end_workqueue_fn(struct btrfs_work_struct *work) +static void end_workqueue_fn(struct btrfs_work *work) { struct bio *bio; struct end_io_wq *end_io_wq; diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index 80ecc14..bae2a52 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -377,7 +377,7 @@ static u64 add_new_free_space(struct btrfs_block_group_cache *block_group, return total_added; } -static noinline void caching_thread(struct btrfs_work_struct *work) +static noinline void caching_thread(struct btrfs_work *work) { struct btrfs_block_group_cache *block_group; struct btrfs_fs_info *fs_info; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 62e4fc2..3b8cfb1 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -305,7 +305,7 @@ struct async_cow { u64 start; u64 end; struct list_head extents; - struct btrfs_work_struct work; + struct btrfs_work work; }; static noinline int add_async_extent(struct async_cow *cow, @@ -980,7 +980,7 @@ out_unlock: /* * work queue call back to started compression on a file and pages */ -static noinline void async_cow_start(struct btrfs_work_struct *work) +static noinline void async_cow_start(struct btrfs_work *work) { struct async_cow *async_cow; int num_added = 0; @@ -998,7 +998,7 @@ static noinline void async_cow_start(struct btrfs_work_struct *work) /* * work queue call back to submit previously compressed pages */ -static noinline void async_cow_submit(struct btrfs_work_struct *work) +static noinline void async_cow_submit(struct btrfs_work *work) { struct async_cow *async_cow; struct btrfs_root *root; @@ -1019,7 +1019,7 @@ static noinline void async_cow_submit(struct btrfs_work_struct *work) submit_compressed_extents(async_cow->inode, async_cow); } -static noinline void async_cow_free(struct btrfs_work_struct *work) +static noinline void async_cow_free(struct btrfs_work *work) { struct async_cow *async_cow; async_cow = container_of(work, struct async_cow, work); @@ -1727,10 +1727,10 @@ int btrfs_set_extent_delalloc(struct inode *inode, u64 start, u64 end, /* see btrfs_writepage_start_hook for details on why this is required */ struct btrfs_writepage_fixup { struct page *page; - struct btrfs_work_struct work; + struct btrfs_work work; }; -static void btrfs_writepage_fixup_worker(struct btrfs_work_struct *work) +static void btrfs_writepage_fixup_worker(struct btrfs_work *work) { struct btrfs_writepage_fixup *fixup; struct btrfs_ordered_extent *ordered; @@ -2725,7 +2725,7 @@ out: return ret; } -static void finish_ordered_fn(struct btrfs_work_struct *work) +static void finish_ordered_fn(struct btrfs_work *work) { struct btrfs_ordered_extent *ordered_extent; ordered_extent = container_of(work, struct btrfs_ordered_extent, work); @@ -2738,7 
+2738,7 @@ static int btrfs_writepage_end_io_hook(struct page *page, u64 start, u64 end, struct inode *inode = page->mapping->host; struct btrfs_root *root = BTRFS_I(inode)->root; struct btrfs_ordered_extent *ordered_extent = NULL; - struct btrfs_workqueue_struct *workers; + struct btrfs_workqueue *workers; trace_btrfs_writepage_end_io_hook(page, start, end, uptodate); @@ -8161,7 +8161,7 @@ out_notrans: return ret; } -static void btrfs_run_delalloc_work(struct btrfs_work_struct *work) +static void btrfs_run_delalloc_work(struct btrfs_work *work) { struct btrfs_delalloc_work *delalloc_work; struct inode *inode; diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c index e0c3cf0..2ec40b3 100644 --- a/fs/btrfs/ordered-data.c +++ b/fs/btrfs/ordered-data.c @@ -552,7 +552,7 @@ void btrfs_remove_ordered_extent(struct inode *inode, wake_up(&entry->wait); } -static void btrfs_run_ordered_extent_work(struct btrfs_work_struct *work) +static void btrfs_run_ordered_extent_work(struct btrfs_work *work) { struct btrfs_ordered_extent *ordered; diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h index 9d88f37..9b0450f 100644 --- a/fs/btrfs/ordered-data.h +++ b/fs/btrfs/ordered-data.h @@ -130,10 +130,10 @@ struct btrfs_ordered_extent { /* a per root list of all the pending ordered extents */ struct list_head root_extent_list; - struct btrfs_work_struct work; + struct btrfs_work work; struct completion completion; - struct btrfs_work_struct flush_work; + struct btrfs_work flush_work; struct list_head work_list; }; diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index 521144e..f653ffd 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -1981,7 +1981,7 @@ out: return ret; } -static void btrfs_qgroup_rescan_worker(struct btrfs_work_struct *work) +static void btrfs_qgroup_rescan_worker(struct btrfs_work *work) { struct btrfs_fs_info *fs_info = container_of(work, struct btrfs_fs_info, qgroup_rescan_work); diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 5afa564..1269fc3 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -87,7 +87,7 @@ struct btrfs_raid_bio { /* * for scheduling work in the helper threads */ - struct btrfs_work_struct work; + struct btrfs_work work; /* * bio list and bio_list_lock are used @@ -166,8 +166,8 @@ struct btrfs_raid_bio { static int __raid56_parity_recover(struct btrfs_raid_bio *rbio); static noinline void finish_rmw(struct btrfs_raid_bio *rbio); -static void rmw_work(struct btrfs_work_struct *work); -static void read_rebuild_work(struct btrfs_work_struct *work); +static void rmw_work(struct btrfs_work *work); +static void read_rebuild_work(struct btrfs_work *work); static void async_rmw_stripe(struct btrfs_raid_bio *rbio); static void async_read_rebuild(struct btrfs_raid_bio *rbio); static int fail_bio_stripe(struct btrfs_raid_bio *rbio, struct bio *bio); @@ -1588,7 +1588,7 @@ struct btrfs_plug_cb { struct blk_plug_cb cb; struct btrfs_fs_info *info; struct list_head rbio_list; - struct btrfs_work_struct work; + struct btrfs_work work; }; /* @@ -1652,7 +1652,7 @@ static void run_plug(struct btrfs_plug_cb *plug) * if the unplug comes from schedule, we have to push the * work off to a helper thread */ -static void unplug_work(struct btrfs_work_struct *work) +static void unplug_work(struct btrfs_work *work) { struct btrfs_plug_cb *plug; plug = container_of(work, struct btrfs_plug_cb, work); @@ -2079,7 +2079,7 @@ int raid56_parity_recover(struct btrfs_root *root, struct bio *bio, } -static void rmw_work(struct btrfs_work_struct *work) +static void 
rmw_work(struct btrfs_work *work) { struct btrfs_raid_bio *rbio; @@ -2087,7 +2087,7 @@ static void rmw_work(struct btrfs_work_struct *work) raid56_rmw_stripe(rbio); } -static void read_rebuild_work(struct btrfs_work_struct *work) +static void read_rebuild_work(struct btrfs_work *work) { struct btrfs_raid_bio *rbio; diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c index 854b69a..84e10dd 100644 --- a/fs/btrfs/reada.c +++ b/fs/btrfs/reada.c @@ -91,8 +91,7 @@ struct reada_zone { }; struct reada_machine_work { - struct btrfs_work_struct - work; + struct btrfs_work work; struct btrfs_fs_info *fs_info; }; @@ -733,7 +732,7 @@ static int reada_start_machine_dev(struct btrfs_fs_info *fs_info, } -static void reada_start_machine_worker(struct btrfs_work_struct *work) +static void reada_start_machine_worker(struct btrfs_work *work) { struct reada_machine_work *rmw; struct btrfs_fs_info *fs_info; diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 1618d6d..a224575 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -96,8 +96,7 @@ struct scrub_bio { #endif int page_count; int next_free; - struct btrfs_work_struct - work; + struct btrfs_work work; }; struct scrub_block { @@ -155,8 +154,7 @@ struct scrub_fixup_nodatasum { struct btrfs_device *dev; u64 logical; struct btrfs_root *root; - struct btrfs_work_struct - work; + struct btrfs_work work; int mirror_num; }; @@ -174,8 +172,7 @@ struct scrub_copy_nocow_ctx { int mirror_num; u64 physical_for_dev_replace; struct list_head inodes; - struct btrfs_work_struct - work; + struct btrfs_work work; }; struct scrub_warning { @@ -235,7 +232,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, u64 gen, int mirror_num, u8 *csum, int force, u64 physical_for_dev_replace); static void scrub_bio_end_io(struct bio *bio, int err); -static void scrub_bio_end_io_worker(struct btrfs_work_struct *work); +static void scrub_bio_end_io_worker(struct btrfs_work *work); static void scrub_block_complete(struct scrub_block *sblock); static void scrub_remap_extent(struct btrfs_fs_info *fs_info, u64 extent_logical, u64 extent_len, @@ -252,14 +249,14 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx, struct scrub_page *spage); static void scrub_wr_submit(struct scrub_ctx *sctx); static void scrub_wr_bio_end_io(struct bio *bio, int err); -static void scrub_wr_bio_end_io_worker(struct btrfs_work_struct *work); +static void scrub_wr_bio_end_io_worker(struct btrfs_work *work); static int write_page_nocow(struct scrub_ctx *sctx, u64 physical_for_dev_replace, struct page *page); static int copy_nocow_pages_for_inode(u64 inum, u64 offset, u64 root, struct scrub_copy_nocow_ctx *ctx); static int copy_nocow_pages(struct scrub_ctx *sctx, u64 logical, u64 len, int mirror_num, u64 physical_for_dev_replace); -static void copy_nocow_pages_worker(struct btrfs_work_struct *work); +static void copy_nocow_pages_worker(struct btrfs_work *work); static void scrub_pending_bio_inc(struct scrub_ctx *sctx) @@ -703,7 +700,7 @@ out: return -EIO; } -static void scrub_fixup_nodatasum(struct btrfs_work_struct *work) +static void scrub_fixup_nodatasum(struct btrfs_work *work) { int ret; struct scrub_fixup_nodatasum *fixup; @@ -1608,7 +1605,7 @@ static void scrub_wr_bio_end_io(struct bio *bio, int err) btrfs_queue_work(fs_info->scrub_wr_completion_workers, &sbio->work); } -static void scrub_wr_bio_end_io_worker(struct btrfs_work_struct *work) +static void scrub_wr_bio_end_io_worker(struct btrfs_work *work) { struct scrub_bio *sbio = container_of(work, struct scrub_bio, work); 
struct scrub_ctx *sctx = sbio->sctx; @@ -2076,7 +2073,7 @@ static void scrub_bio_end_io(struct bio *bio, int err) btrfs_queue_work(fs_info->scrub_workers, &sbio->work); } -static void scrub_bio_end_io_worker(struct btrfs_work_struct *work) +static void scrub_bio_end_io_worker(struct btrfs_work *work) { struct scrub_bio *sbio = container_of(work, struct scrub_bio, work); struct scrub_ctx *sctx = sbio->sctx; @@ -3157,7 +3154,7 @@ static int record_inode_for_nocow(u64 inum, u64 offset, u64 root, void *ctx) #define COPY_COMPLETE 1 -static void copy_nocow_pages_worker(struct btrfs_work_struct *work) +static void copy_nocow_pages_worker(struct btrfs_work *work) { struct scrub_copy_nocow_ctx *nocow_ctx = container_of(work, struct scrub_copy_nocow_ctx, work); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index e07bd64..9ac6d28 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -440,7 +440,7 @@ done: blk_finish_plug(&plug); } -static void pending_bios_fn(struct btrfs_work_struct *work) +static void pending_bios_fn(struct btrfs_work *work) { struct btrfs_device *device; diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h index 60802e2..8b3cd14 100644 --- a/fs/btrfs/volumes.h +++ b/fs/btrfs/volumes.h @@ -95,7 +95,7 @@ struct btrfs_device { /* per-device scrub information */ struct scrub_ctx *scrub_device; - struct btrfs_work_struct work; + struct btrfs_work work; struct rcu_head rcu; struct work_struct rcu_work; -- 1.8.5.1 -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html