search for: worker_thread

Displaying 20 results from an estimated 295 matches for "worker_thread".

2014 Sep 04
1
Kernel errors after updating
...52a36e>] __mutex_lock_slowpath+0x13e/0x180
[<ffffffff810d24d0>] ? do_rebuild_sched_domains+0x0/0x50
[<ffffffff8152a20b>] mutex_lock+0x2b/0x50
[<ffffffff810c97b5>] cgroup_lock+0x15/0x20
[<ffffffff810d24e8>] do_rebuild_sched_domains+0x18/0x50
[<ffffffff81094a20>] worker_thread+0x170/0x2a0
[<ffffffff8109afa0>] ? autoremove_wake_function+0x0/0x40
[<ffffffff810948b0>] ? worker_thread+0x0/0x2a0
[<ffffffff8109abf6>] kthread+0x96/0xa0
[<ffffffff8100c20a>] child_rip+0xa/0x20
[<ffffffff8109ab60>] ? kthread+0x0/0xa0
[<ffffffff8100c200>] ?...
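
A note on why worker_thread shows up in every trace on this page: it is the main loop of a kernel workqueue worker, so any queued work handler (here do_rebuild_sched_domains) executes with worker_thread and kthread beneath it on the stack. A minimal kernel-module sketch of that path; the demo_* names are hypothetical and not taken from any message here:

/* Sketch: how a handler ends up under worker_thread() in a trace.
 * A kworker picks the item up via worker_thread() -> process_one_work(),
 * which calls the handler; an oops inside the handler therefore always
 * shows the worker_thread/kthread tail seen in these reports. */
#include <linux/module.h>
#include <linux/workqueue.h>

static void demo_handler(struct work_struct *work)
{
	pr_info("running in a kworker, under worker_thread()\n");
}

static DECLARE_WORK(demo_work, demo_handler);

static int __init demo_init(void)
{
	schedule_work(&demo_work);	/* queue on the system workqueue */
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_work);		/* wait for the handler to finish */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
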
2009 Apr 15
1
hang with fsdlm
...hree nodes, the test hangs, and I collect the following information:
bull-01
-------
3053 S< [ocfs2dc] ocfs2_downconvert_thread
3054 S< [dlm_astd] dlm_astd
3055 S< [dlm_scand] dlm_scand
3056 S< [dlm_recv/0] worker_thread
3057 S< [dlm_recv/1] worker_thread
3058 S< [dlm_recv/2] worker_thread
3059 S< [dlm_recv/3] worker_thread
3060 S< [dlm_send] worker_thread
3061 S< [dlm_recoverd] dlm_recoverd
3067 S< [kjour...
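
The per-CPU naming in this listing (dlm_recv/0 through dlm_recv/3, all parked in worker_thread) matches the pre-2.6.36 workqueue implementation, where create_workqueue() spawned one worker thread per CPU and the single-threaded variant spawned exactly one, matching dlm_send. A sketch under that assumption (hypothetical wq_demo_* names, not the dlm code):

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *recv_wq;
static struct workqueue_struct *send_wq;

static int __init wq_demo_init(void)
{
	/* On 2.6-era kernels this spawned one worker per CPU, e.g.
	 * dlm_recv/0 .. dlm_recv/3 on a 4-CPU node, each in worker_thread(). */
	recv_wq = create_workqueue("dlm_recv");
	/* Single-threaded variant: one thread total. */
	send_wq = create_singlethread_workqueue("dlm_send");
	if (!recv_wq || !send_wq) {
		if (recv_wq)
			destroy_workqueue(recv_wq);
		if (send_wq)
			destroy_workqueue(send_wq);
		return -ENOMEM;
	}
	return 0;
}

static void __exit wq_demo_exit(void)
{
	destroy_workqueue(recv_wq);
	destroy_workqueue(send_wq);
}

module_init(wq_demo_init);
module_exit(wq_demo_exit);
MODULE_LICENSE("GPL");
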
2008 Jul 24
4
umount oops
...cleaner+0xf0/0x110
Jul 24 22:44:54 minerva kernel: [ 1532.887479] [btrfs:btrfs_transaction_cleaner+0x0/0x110] :btrfs:btrfs_transaction_cleaner+0x0/0x110
Jul 24 22:44:54 minerva kernel: [ 1532.887543] [run_workqueue+0xcc/0x170] run_workqueue+0xcc/0x170
Jul 24 22:44:54 minerva kernel: [ 1532.887601] [worker_thread+0x0/0x110] worker_thread+0x0/0x110
Jul 24 22:44:54 minerva kernel: [ 1532.887661] [worker_thread+0x0/0x110] worker_thread+0x0/0x110
Jul 24 22:44:54 minerva kernel: [ 1532.887720] [worker_thread+0xa3/0x110] worker_thread+0xa3/0x110
Jul 24 22:44:54 minerva kernel: [ 1532.887780] [<ffffffff80253a0...
2013 Nov 01
2
5.10, crashes
...>] :scsi_mod:scsi_request_fn+0x6a/0x392
Nov 1 14:34:22 <server> kernel: [<ffffffff8005abd2>] generic_unplug_device+0x22/0x32
Nov 1 14:34:22 <server> kernel: [<ffffffff8004d957>] run_workqueue+0x9e/0xfb
Nov 1 14:34:22 <server> kernel: [<ffffffff8004a1aa>] worker_thread+0x0/0x122
Nov 1 14:34:22 <server> kernel: [<ffffffff800a3d4a>] keventd_create_kthread+0x0/0xc4
Nov 1 14:34:22 <server> kernel: [<ffffffff8004a29a>] worker_thread+0xf0/0x122
Nov 1 14:34:22 <server> kernel: [<ffffffff8008f4a9>] default_wake_function+0x0/0xe
N...
2010 Nov 19
1
Btrfs_truncate ?
..._wake_function+0x0/0x2a
[69374.648273] [<ffffffffa01495a1>] ? do_async_commit+0x0/0x1b [btrfs]
[69374.648282] [<ffffffffa01495b3>] ? do_async_commit+0x12/0x1b [btrfs]
[69374.648285] [<ffffffff8105b3e4>] ? process_one_work+0x1d1/0x2ee
[69374.648287] [<ffffffff8105ce09>] ? worker_thread+0x12d/0x247
[69374.648289] [<ffffffff8105ccdc>] ? worker_thread+0x0/0x247
[69374.648291] [<ffffffff8105ccdc>] ? worker_thread+0x0/0x247
[69374.648293] [<ffffffff8105fcac>] ? kthread+0x7a/0x82
[69374.648297] [<ffffffff8100a8a4>] ? kernel_thread_helper+0x4/0x10
[69374.6482...
2010 Apr 29
2
Hardware error or ocfs2 error?
...ocfs2]
Apr 29 11:01:18 node06 kernel: [2569440.616669] [<ffffffff810fdfa8>] ? iput+0x27/0x60
Apr 29 11:01:18 node06 kernel: [2569440.616689] [<ffffffffa0fd0a8f>] ? ocfs2_complete_recovery+0x82b/0xa3f [ocfs2]
Apr 29 11:01:18 node06 kernel: [2569440.616715] [<ffffffff8106144b>] ? worker_thread+0x188/0x21d
Apr 29 11:01:18 node06 kernel: [2569440.616736] [<ffffffffa0fd0264>] ? ocfs2_complete_recovery+0x0/0xa3f [ocfs2]
Apr 29 11:01:18 node06 kernel: [2569440.616761] [<ffffffff81064a36>] ? autoremove_wake_function+0x0/0x2e
Apr 29 11:01:18 node06 kernel: [2569440.616778] [<f...
2010 Jan 09
2
[TTM] general protection fault in ttm_tt_swapout, to_virtual looks screwed up
...00000 <0> ffff88001f4be844 ffff88003edb95d8 ffff88003749fdb0 ffffffffa017d550
Call Trace:
[<ffffffffa017d550>] ttm_bo_swapout+0x1df/0x222 [ttm]
[<ffffffffa017b38d>] ttm_shrink+0x9b/0xc0 [ttm]
[<ffffffffa017b3c6>] ttm_shrink_work+0x14/0x16 [ttm]
[<ffffffff810489c7>] worker_thread+0x1b7/0x25e
[<ffffffffa017b3b2>] ? ttm_shrink_work+0x0/0x16 [ttm]
[<ffffffff8104bd8a>] ? autoremove_wake_function+0x0/0x38
[<ffffffff81048810>] ? worker_thread+0x0/0x25e
[<ffffffff8104ba75>] kthread+0x7c/0x84
[<ffffffff8100bd6a>] child_rip+0xa/0x20
[<ffffffffa...
2011 Feb 25
2
Bug in kvm_set_irq
...5.246123] [<ffffffffa041bc30>] ? irqfd_inject+0x0/0x50 [kvm]
Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8106b6f2>] ? process_one_work+0x112/0x460
Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8106be25>] ? worker_thread+0x145/0x410
Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8103a3d0>] ? __wake_up_common+0x50/0x80
Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8106bce0>] ? worker_thread+0x0/0x410
Feb 23 13:56:19 ayrshire.u06.univ-...
2010 Aug 06
7
[GIT PULL] devel/pat + devel/kms.fixes-0.5
Hey Jeremy, Please pull from devel/pat (based off your xen/dom0/core tree), which has one patch:

Konrad Rzeszutek Wilk (1):
      xen/pat: make pte_flags(x) a pvops function.

which is necessary for the drivers/gpu/drm/radeon driver to work properly with AGP-based cards (which look to be the only ones that try to set WC on pages). Also please pull from devel/kms.fixes-05 (based off your
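
For readers unfamiliar with pvops: making pte_flags(x) a pvops function means routing it through a per-hypervisor table of function pointers, so a Xen build can filter the PTE flag bits it manages (such as the WC attribute radeon sets here). An illustrative userspace-style sketch of the pattern only; pv_ops_demo and the masks are hypothetical, not the kernel's pv_mmu_ops:

/* Sketch of the pvops indirection: callers never read flags directly;
 * they go through a hook a hypervisor port can override at boot. */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;

struct pv_mmu_ops_demo {
	pteval_t (*pte_flags)(pteval_t pte);
};

static pteval_t native_pte_flags(pteval_t pte)
{
	return pte & 0xfffULL;			/* low flag bits, simplified */
}

static pteval_t xen_pte_flags(pteval_t pte)
{
	return native_pte_flags(pte) & ~0x8ULL;	/* hypothetical: hide a managed bit */
}

static struct pv_mmu_ops_demo pv_ops_demo = { .pte_flags = native_pte_flags };

static pteval_t pte_flags(pteval_t pte)		/* all callers use the hook */
{
	return pv_ops_demo.pte_flags(pte);
}

int main(void)
{
	pteval_t pte = 0xfffULL;
	printf("native: %llx\n", (unsigned long long)pte_flags(pte));
	pv_ops_demo.pte_flags = xen_pte_flags;	/* "boot-time" override */
	printf("xen:    %llx\n", (unsigned long long)pte_flags(pte));
	return 0;
}
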
2006 Jul 28
3
Private Interconnect and self fencing
I have an OCFS2 filesystem on a Coraid AoE device. It mounts fine, but under heavy I/O the server self-fences, claiming a write timeout:

(16,2):o2hb_write_timeout:164 ERROR: Heartbeat write timeout to device etherd/e0.1p1 after 12000 milliseconds
(16,2):o2hb_stop_all_regions:1789 ERROR: stopping heartbeat on all active regions.
Kernel panic - not syncing: ocfs2 is very sorry to be fencing this
2014 Jan 30
2
CentOS 6.5: NFS server crashes with list_add corruption errors
...12944ed>] ? __list_add+0x6d/0xa0
Jan 30 09:46:13 qb-storage kernel: [<ffffffffa05bd60a>] ? laundromat_main+0x23a/0x3f0 [nfsd]
Jan 30 09:46:13 qb-storage kernel: [<ffffffffa05bd3d0>] ? laundromat_main+0x0/0x3f0 [nfsd]
Jan 30 09:46:13 qb-storage kernel: [<ffffffff81094d30>] ? worker_thread+0x170/0x2a0
Jan 30 09:46:13 qb-storage kernel: [<ffffffff8109b2b0>] ? autoremove_wake_function+0x0/0x40
Jan 30 09:46:13 qb-storage kernel: [<ffffffff81094bc0>] ? worker_thread+0x0/0x2a0
Jan 30 09:46:13 qb-storage kernel: [<ffffffff8109af06>] ? kthread+0x96/0xa0
Jan 30 09:46:13 q...
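
laundromat_main here is nfsd's periodic state-cleanup job; the shape is a delayed work item whose handler re-arms itself, which is why it appears under worker_thread. A minimal sketch of that self-rearming pattern (hypothetical laundry_demo names, not the nfsd code):

#include <linux/jiffies.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static void laundry_demo(struct work_struct *work);
static DECLARE_DELAYED_WORK(laundry_work, laundry_demo);

static void laundry_demo(struct work_struct *work)
{
	/* ... scan and expire stale state here ... */
	schedule_delayed_work(&laundry_work, 60 * HZ);	/* run again in 60s */
}

static int __init laundry_demo_init(void)
{
	schedule_delayed_work(&laundry_work, 0);	/* first run immediately */
	return 0;
}

static void __exit laundry_demo_exit(void)
{
	cancel_delayed_work_sync(&laundry_work);	/* stop the rearm cycle */
}

module_init(laundry_demo_init);
module_exit(laundry_demo_exit);
MODULE_LICENSE("GPL");
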
2019 Aug 06
2
Xorg indefinitely hangs in kernelspace
..._modeset_lock_all_ctx+0x5d/0xe0 [drm]
[354073.762251] drm_modeset_lock_all+0x5e/0xb0 [drm]
[354073.762252] qxl_display_read_client_monitors_config+0x1e1/0x370 [qxl]
[354073.762254] qxl_client_monitors_config_work_func+0x15/0x20 [qxl]
[354073.762256] process_one_work+0x20f/0x410
[354073.762257] worker_thread+0x34/0x400
[354073.762259] kthread+0x120/0x140
[354073.762260] ? process_one_work+0x410/0x410
[354073.762261] ? __kthread_parkme+0x70/0x70
[354073.762262] ret_from_fork+0x35/0x40
[354194.557095] INFO: task Xorg:920 blocked for more than 241 seconds.
[354194.558311] Not tainted 5.2.0-05020...
2018 Aug 06
1
[PATCH v4 7/8] drm/nouveau: Fix deadlocks in nouveau_connector_detect()
...0] schedule+0x33/0x90
> [ 861.489744] rpm_resume+0x19c/0x850
> [ 861.490392] ? finish_wait+0x90/0x90
> [ 861.491068] __pm_runtime_resume+0x4e/0x90
> [ 861.491753] nouveau_display_hpd_work+0x22/0x60 [nouveau]
> [ 861.492416] process_one_work+0x231/0x620
> [ 861.493068] worker_thread+0x44/0x3a0
> [ 861.493722] kthread+0x12b/0x150
> [ 861.494342] ? wq_pool_ids_show+0x140/0x140
> [ 861.494991] ? kthread_create_worker_on_cpu+0x70/0x70
> [ 861.495648] ret_from_fork+0x3a/0x50
> [ 861.496304] INFO: task kworker/6:2:320 blocked for more than 120 seconds.
> [...
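
The tail of this trace is the deadlock's signature: a work handler (nouveau_display_hpd_work) sleeping in rpm_resume() while, per the patch description, the PM side in turn waits on that work. The generic lock-vs-flush shape of such deadlocks, as a deliberately broken sketch with hypothetical hpd_demo_* names (not the nouveau code); loading it would hang once the handler blocks on the held mutex:

#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(hpd_demo_lock);

static void hpd_demo_handler(struct work_struct *work)
{
	mutex_lock(&hpd_demo_lock);	/* blocks while hpd_demo_init() holds it */
	mutex_unlock(&hpd_demo_lock);
}

static DECLARE_WORK(hpd_demo_work, hpd_demo_handler);

static int __init hpd_demo_init(void)
{
	schedule_work(&hpd_demo_work);
	mutex_lock(&hpd_demo_lock);
	/* Waits for hpd_demo_handler, which waits for the mutex we hold:
	 * once the handler has started, neither side can proceed. */
	flush_work(&hpd_demo_work);
	mutex_unlock(&hpd_demo_lock);
	return 0;
}

module_init(hpd_demo_init);
MODULE_LICENSE("GPL");
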
2014 Oct 20
1
Virtio_config BUG with 3.18-rc1
...all Trace:
[ 2.201004] [<ffffffffa020f757>] add_port+0x3b7/0x3e0 [virtio_console]
[ 2.201004] [<ffffffffa020ffdc>] control_work_handler+0x39c/0x3e8 [virtio_console]
[ 2.201004] [<ffffffff810af9e9>] process_one_work+0x149/0x3d0
[ 2.201004] [<ffffffff810b006b>] worker_thread+0x11b/0x490
[ 2.201004] [<ffffffff810aff50>] ? rescuer_thread+0x2e0/0x2e0
[ 2.201004] [<ffffffff810b5218>] kthread+0xd8/0xf0
[ 2.201004] [<ffffffff810b5140>] ? kthread_create_on_node+0x1b0/0x1b0
[ 2.201004] [<ffffffff8174b53c>] ret_from_fork+0x7c/0xb0
[ 2....
2017 Jan 24
1
[PATCH 2/2] drm/nouveau: Queue hpd_work on (runtime) resume
...ffff8c5fffee>] rpm_suspend+0x11e/0x6f0
[ 246.899701] [<ffffffff8c60149b>] pm_runtime_work+0x7b/0xc0
[ 246.899707] [<ffffffff8c0afe58>] process_one_work+0x1f8/0x750
[ 246.899710] [<ffffffff8c0afdd9>] ? process_one_work+0x179/0x750
[ 246.899713] [<ffffffff8c0b03fb>] worker_thread+0x4b/0x4f0
[ 246.899717] [<ffffffff8c0bf8fc>] ? preempt_count_sub+0x4c/0x80
[ 246.899720] [<ffffffff8c0b03b0>] ? process_one_work+0x750/0x750
[ 246.899723] [<ffffffff8c0b7212>] kthread+0x102/0x120
[ 246.899728] [<ffffffff8c0ef546>] ? trace_hardirqs_on_caller+0x16/0x1...
2018 Jul 16
0
[PATCH 2/5] drm/nouveau: Grab RPM ref while probing outputs
...20/0xb0 [drm_kms_helper]
[ 246.689420] drm_fb_helper_output_poll_changed+0x23/0x30 [drm_kms_helper]
[ 246.690462] drm_kms_helper_hotplug_event+0x2a/0x30 [drm_kms_helper]
[ 246.691570] output_poll_execute+0x198/0x1c0 [drm_kms_helper]
[ 246.692611] process_one_work+0x231/0x620
[ 246.693725] worker_thread+0x214/0x3a0
[ 246.694756] kthread+0x12b/0x150
[ 246.695856] ? wq_pool_ids_show+0x140/0x140
[ 246.696888] ? kthread_create_worker_on_cpu+0x70/0x70
[ 246.697998] ret_from_fork+0x3a/0x50
[ 246.699034] INFO: task kworker/0:1:60 blocked for more than 120 seconds.
[ 246.700153] Not tainte...
2018 Aug 13
6
[PATCH v7 0/5] Fix connector probing deadlocks from RPM bugs
Latest version of https://patchwork.freedesktop.org/series/46815/ , with one small change re: Ilia

Lyude Paul (5):
  drm/nouveau: Fix bogus drm_kms_helper_poll_enable() placement
  drm/nouveau: Remove duplicate poll_enable() in pmops_runtime_suspend()
  drm/nouveau: Fix deadlock with fb_helper with async RPM requests
  drm/nouveau: Use pm_runtime_get_noresume() in connector_detect()