search for: btrfs_queue_worker

Displaying 20 results from an estimated 26 matches for "btrfs_queue_worker".

2012 Oct 30
8
Crashes in extent_io.c after "btrfs bad mapping eb" notice
...fffff8126bb26>] ? btrfs_finish_ordered_io+0x2e1/0x332 Oct 30 22:42:36 localhost kernel: [<ffffffff8103cfae>] ? run_timer_softirq+0x2c2/0x2c2 Oct 30 22:42:36 localhost kernel: [<ffffffff81286398>] ? worker_loop+0x174/0x4c0 Oct 30 22:42:36 localhost kernel: [<ffffffff81286224>] ? btrfs_queue_worker+0x261/0x261 Oct 30 22:42:36 localhost kernel: [<ffffffff81286224>] ? btrfs_queue_worker+0x261/0x261 Oct 30 22:42:36 localhost kernel: [<ffffffff8104b293>] ? kthread+0x81/0x89 Oct 30 22:42:36 localhost kernel: [<ffffffff815de774>] ? kernel_thread_helper+0x4/0x10 Oct 30 22:42:36 loc...
2012 Nov 22
0
raid10 data fs full after degraded mount
...[ 2898.699783] [<ffffffffa01a25c4>] ? add_pending_csums.isra.40+0x33/0x4e [btrfs] [ 2898.699799] [<ffffffffa01a74ae>] ? btrfs_finish_ordered_io+0x1e8/0x303 [btrfs] [ 2898.699816] [<ffffffffa01bfc70>] ? worker_loop+0x175/0x4ad [btrfs] [ 2898.699832] [<ffffffffa01bfafb>] ? btrfs_queue_worker+0x27a/0x27a [btrfs] [ 2898.699846] [<ffffffffa01bfafb>] ? btrfs_queue_worker+0x27a/0x27a [btrfs] [ 2898.699850] [<ffffffff81056d28>] ? kthread+0x81/0x89 [ 2898.699854] [<ffffffff81056ca7>] ? __kthread_parkme+0x5c/0x5c [ 2898.699859] [<ffffffff81373f7c>] ? ret_from_fork+0...
2011 Dec 07
3
WARNING: at fs/btrfs/extent-tree.c:4754 followed by BUG: unable to handle kernel NULL pointer dereference at (null)
...ansaction+0x90/0x1dd [172816.293024] [<ffffffff810273b0>] ? should_resched+0x5/0x24 [172816.293027] [<ffffffff81166981>] ? btrfs_async_run_delayed_node_done+0x16c/0x1ca [172816.293029] [<ffffffff8114f20f>] ? worker_loop+0x170/0x46d [172816.293031] [<ffffffff8114f09f>] ? btrfs_queue_worker+0x25b/0x25b [172816.293033] [<ffffffff8114f09f>] ? btrfs_queue_worker+0x25b/0x25b [172816.293036] [<ffffffff8104883b>] ? kthread+0x7a/0x82 [172816.293040] [<ffffffff81415af4>] ? kernel_thread_helper+0x4/0x10 [172816.293042] [<ffffffff810487c1>] ? kthread_worker_fn+0x135/...
2013 Jul 03
1
WARNING: at fs/btrfs/backref.c:903 find_parent_nodes+0x616/0x815 [btrfs]()
...[btrfs] Jul 2 21:43:27 bkp010 kernel: [ 696.777648] [<ffffffffa07113ff>] finish_ordered_fn+0x10/0x12 [btrfs] Jul 2 21:43:27 bkp010 kernel: [ 696.777787] [<ffffffffa072ab5c>] worker_loop+0x15e/0x48e [btrfs] Jul 2 21:43:27 bkp010 kernel: [ 696.777917] [<ffffffffa072a9fe>] ? btrfs_queue_worker+0x267/0x267 [btrfs] Jul 2 21:43:27 bkp010 kernel: [ 696.778046] [<ffffffff81048ab2>] kthread+0xb5/0xbd Jul 2 21:43:27 bkp010 kernel: [ 696.778172] [<ffffffff810489fd>] ? kthread_freezable_should_stop+0x43/0x43 Jul 2 21:43:27 bkp010 kernel: [ 696.778294] [<ffffffff8137576c>...
2012 May 22
1
warnings met in introduce extent buffer cache for each i-node patch
...t;] bio_endio+0x1d/0x40 May 22 09:23:57 bigbox kernel: [56455.532869] [<ffffffff812d9846>] end_workqueue_fn+0x56/0x140 May 22 09:23:57 bigbox kernel: [56455.532886] [<ffffffff8130aeb8>] worker_loop+0x148/0x580 May 22 09:23:57 bigbox kernel: [56455.532898] [<ffffffff8130ad70>] ? btrfs_queue_worker+0x2e0/0x2e0 May 22 09:23:57 bigbox kernel: [56455.532915] [<ffffffff81076473>] kthread+0x93/0xa0 May 22 09:23:57 bigbox kernel: [56455.532929] [<ffffffff81678be4>] kernel_thread_helper+0x4/0x10 May 22 09:23:57 bigbox kernel: [56455.532944] [<fffffffff3/0x13 May 22 09:23:57 bigbox...
2010 Mar 12
2
[PATCH] Btrfs: force delalloc flushing when things get desperate
...wait = false; @@ -2939,7 +2945,7 @@ static void flush_delalloc(struct btrfs_root *root, spin_unlock(&info->lock); if (wait) { - wait_on_flush(root, info); + wait_on_flush(root, info, soft); return; } @@ -2953,7 +2959,7 @@ static void flush_delalloc(struct btrfs_root *root, btrfs_queue_worker(&root->fs_info->enospc_workers, &async->work); - wait_on_flush(root, info); + wait_on_flush(root, info, soft); return; flush: @@ -3146,14 +3152,17 @@ again: if (!delalloc_flushed) { delalloc_flushed = true; - flush_delalloc(root, meta_sinfo); + flush_delalloc(r...
2013 Feb 13
0
Re: Heavy memory leak when using quota groups
...xa0/0xa0 > [ 5123.800384] [<ffffffffa0549935>] finish_ordered_fn+0x15/0x20 [btrfs] > [ 5123.800394] [<ffffffffa056ac2f>] worker_loop+0x16f/0x5d0 [btrfs] > [ 5123.800401] [<ffffffff810888a8>] ? __wake_up_common+0x58/0x90 > [ 5123.800411] [<ffffffffa056aac0>] ? btrfs_queue_worker+0x310/0x310 [btrfs] > [ 5123.800415] [<ffffffff8107f080>] kthread+0xc0/0xd0 > [ 5123.800417] [<ffffffff8107efc0>] ? flush_kthread_worker+0xb0/0xb0 > [ 5123.800423] [<ffffffff816f452c>] ret_from_fork+0x7c/0xb0 > [ 5123.800425] [<ffffffff8107efc0>] ? flush_kthr...
2013 Jan 21
1
btrfs_start_delalloc_inodes livelocks when creating snapshot under IO
Greetings all, I see the following issue during snap creation under IO: Transaction commit calls btrfs_start_delalloc_inodes() that locks the delalloc_inodes list, fetches the first inode, unlocks the list, triggers btrfs_alloc_delalloc_work/btrfs_queue_worker for this inode and then locks the list again. Then it checks the head of the list again. In my case, this is always exactly the same inode. As a result, this function allocates a huge amount of btrfs_delalloc_work structures, and I start seeing OOM messages in the kernel log, killing processes etc....
2012 Dec 12
1
kernel BUG at fs/btrfs/extent_io.c:4052 (kernel 3.5.3)
...[<ffffffff8104ebd0>] ? usleep_range+0x40/0x40 Dec 11 17:49:04 SANOS1 kernel: [<ffffffffa0243660>] finish_ordered_fn+0x10/0x20 [btrfs] Dec 11 17:49:04 SANOS1 kernel: [<ffffffffa026cd97>] worker_loop+0x157/0x550 [btrfs] Dec 11 17:49:04 SANOS1 kernel: [<ffffffffa026cc40>] ? btrfs_queue_worker+0x310/0x310 [btrfs] Dec 11 17:49:04 SANOS1 kernel: [<ffffffff81061bde>] kthread+0x8e/0xa0 Dec 11 17:49:04 SANOS1 kernel: [<ffffffff81590594>] kernel_thread_helper+0x4/0x10 Dec 11 17:49:04 SANOS1 kernel: [<ffffffff81061b50>] ? flush_kthread_worker+0x70/0x70 Dec 11 17:49:04 SANOS...
2013 Sep 23
6
btrfs: qgroup scan failed with -12
...2.676441] [<ffffffffa036d6e1>] ? qgroup_account_ref_step1+0xea/0x102 [btrfs] [1878432.676542] [<ffffffffa036d915>] btrfs_qgroup_rescan_worker+0x21c/0x516 [btrfs] [1878432.676645] [<ffffffffa03482cc>] worker_loop+0x15e/0x48e [btrfs] [1878432.676702] [<ffffffffa034816e>] ? btrfs_queue_worker+0x267/0x267 [btrfs] [1878432.676757] [<ffffffff8104e51a>] kthread+0xb5/0xbd [1878432.676809] [<ffffffff8104e465>] ? kthread_freezable_should_stop+0x43/0x43 [1878432.676881] [<ffffffff8137da2c>] ret_from_fork+0x7c/0xb0 [1878432.676950] [<ffffffff8104e465>] ? kthread_freez...
2012 Aug 01
7
[PATCH] Btrfs: barrier before waitqueue_active
We need an smp_mb() before waitqueue_active() to avoid missing wakeups. Previously, Mitch was hitting a deadlock between the ordered flushers and the transaction commit because the ordered flushers were waiting for more refs and were never woken up, so those smp_mb()s are the most important. Everything else I added for correctness' sake and to avoid getting bitten by this again somewhere else.
2011 Aug 09
17
Re: Applications using fsync cause hangs for several seconds every few minutes
On 06/21/2011 01:15 PM, Jan Stilow wrote: > Hello, > > Nirbheek Chauhan <nirbheek <at> gentoo.org> writes: >> [...] >> >> Every few minutes, (I guess) when applications do fsync (firefox, >> xchat, vim, etc), all applications that use fsync() hang for several >> seconds, and applications that use general IO suffer extreme >> slowdowns.
2013 Apr 13
0
btrfs crash (and softlockup btrfs-endio-wri)
...54 datastore01 kernel: [1210991.342274] [<ffffffffa03c5f85>] finish_ordered_fn+0x15/0x20 [btrfs] Apr 13 04:05:54 datastore01 kernel: [1210991.342294] [<ffffffffa03e6c56>] worker_loop+0x136/0x580 [btrfs] Apr 13 04:05:54 datastore01 kernel: [1210991.342313] [<ffffffffa03e6b20>] ? btrfs_queue_worker+0x300/0x300 [btrfs] Apr 13 04:05:54 datastore01 kernel: [1210991.342321] [<ffffffff81081c30>] kthread+0xc0/0xd0 Apr 13 04:05:54 datastore01 kernel: [1210991.342328] [<ffffffff81010000>] ? ftrace_define_fields_xen_mc_entry+0xa0/0xf0 Apr 13 04:05:54 datastore01 kernel: [1210991.342332]...
2011 Oct 04
68
[patch 00/65] Error handling patchset v3
Hi all - Here's my current error handling patchset, against 3.1-rc8. Almost all of this patchset is preparing for actual error handling. Before we start in on that work, I'm trying to reduce the surface we need to worry about. It turns out that there is a ton of code that returns an error code but never actually reports an error. The patchset has grown to 65 patches. 46 of them
2013 Feb 02
5
Oops when mounting btrfs partition
...fs] Jan 21 16:35:40 localhost kernel: [1655047.752921] [<ffffffffa01c688f>] worker_loop+0x15f/0x5a0 [btrfs] Jan 21 16:35:40 localhost kernel: [1655047.752923] [<ffffffff8167ed2f>] ? __schedule+0x3cf/0x7c0 Jan 21 16:35:40 localhost kernel: [1655047.752937] [<ffffffffa01c6730>] ? btrfs_queue_worker+0x330/0x330 [btrfs] Jan 21 16:35:40 localhost kernel: [1655047.752941] [<ffffffff81076203>] kthread+0x93/0xa0 Jan 21 16:35:40 localhost kernel: [1655047.752943] [<ffffffff816898e4>] kernel_thread_helper+0x4/0x10 Jan 21 16:35:40 localhost kernel: [1655047.752946] [<ffffffff81076170...
2012 Dec 18
0
[PATCH] [RFC] Btrfs: Subpagesize blocksize (WIP).
...start + 1, uptodate)) - return 0; - - ordered_extent->work.func = finish_ordered_fn; - ordered_extent->work.flags = 0; - - if (btrfs_is_free_space_inode(inode)) - workers = &root->fs_info->endio_freespace_worker; - else - workers = &root->fs_info->endio_write_workers; - btrfs_queue_worker(workers, &ordered_extent->work); +next_block: + if (btrfs_dec_test_ordered_pending(inode, &ordered_extent, start, + io_size, uptodate)) { + ordered_extent->work.func = finish_ordered_fn; + ordered_extent->work.flags = 0; + + if (btrfs_is_free_space_inode(inode)) + work...
2012 Sep 17
13
[PATCH 1/2 v3] Btrfs: use flag EXTENT_DEFRAG for snapshot-aware defrag
We're going to use the flag EXTENT_DEFRAG to indicate which ranges belong to defragment so that we can implement snapshot-aware defrag: We set the EXTENT_DEFRAG flag when dirtying the extents that need to be defragmented, so that later on the writeback thread can differentiate between normal writeback and writeback started by defragmentation. This patch is used for the latter one. Originally patch
2010 Mar 02
3
2.6.33 high cpu usage
With the ATI bug I was hitting earlier fixed, only my btrfs partition continues to show high cpu usage for some operations. Rsync, git pull, git checkout and svn up are typical operations which trigger the high cpu usage. As an example, this perf report is from using git checkout to change to a new branch; the change needed to checkout 208 files out of about 1600 total files. du(1) reports
2013 Apr 25
10
[PATCH v4 0/3] Btrfs: quota rescan for 3.10
The kernel side for rescan, which is needed if you want to enable qgroup tracking on a non-empty volume. The first patch splits btrfs_qgroup_account_ref into readable and reusable units. The second patch adds the rescan implementation (refer to its commit message for a description of the algorithm). The third patch starts an automatic rescan when qgroups are enabled. It is only separated to
2011 Aug 26
0
[PATCH] Btrfs: make some functions return void
...t;lock, flags); -out: - - return 0; } void btrfs_set_work_high_prio(struct btrfs_work *work) diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h index 5077746..6a9d3c1 100644 --- a/fs/btrfs/async-thread.h +++ b/fs/btrfs/async-thread.h @@ -111,9 +111,9 @@ struct btrfs_workers { int btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work); int btrfs_start_workers(struct btrfs_workers *workers, int num_workers); -int btrfs_stop_workers(struct btrfs_workers *workers); +void btrfs_stop_workers(struct btrfs_workers *workers); void btrfs_init_workers(struct btrfs_workers *workers,...