Displaying 20 results from an estimated 20 matches for "manage_workers".
2013 Jul 31
3
[PATCH] virtio-scsi: Fix virtqueue affinity setup
...plug_event_func+0xba/0x1a0
[<ffffffff814906c8>] ? acpi_os_release_object+0xe/0x12
[<ffffffff81475911>] _handle_hotplug_event_func+0x31/0x70
[<ffffffff810b5333>] process_one_work+0x183/0x500
[<ffffffff810b66e2>] worker_thread+0x122/0x400
[<ffffffff810b65c0>] ? manage_workers+0x2d0/0x2d0
[<ffffffff810bc5de>] kthread+0xce/0xe0
[<ffffffff810bc510>] ? kthread_freezable_should_stop+0x70/0x70
[<ffffffff81ca045c>] ret_from_fork+0x7c/0xb0
[<ffffffff810bc510>] ? kthread_freezable_should_stop+0x70/0x70
Code: 01 00 00 00 74 59 45 31 e4 83 bb c8 01...
2011 Sep 10
12
WARNING: at fs/btrfs/inode.c:2193 btrfs_orphan_commit_root+0xb0/0xc0 [btrfs]()
...mmit_transaction+0x870/0x870 [btrfs]
[ 5472.100155] [<ffffffffa0039b0f>] do_async_commit+0x1f/0x30 [btrfs]
[ 5472.100171] [<ffffffff8108110d>] process_one_work+0x11d/0x430
[ 5472.100187] [<ffffffff81081c69>] worker_thread+0x169/0x360
[ 5472.100203] [<ffffffff81081b00>] ? manage_workers.clone.21+0x240/0x240
[ 5472.100220] [<ffffffff81086496>] kthread+0x96/0xa0
[ 5472.100236] [<ffffffff815e5bb4>] kernel_thread_helper+0x4/0x10
[ 5472.100253] [<ffffffff81086400>] ? flush_kthread_worker+0xb0/0xb0
[ 5472.100269] [<ffffffff815e5bb0>] ? gs_change+0x13/0x13
[ 5...
2017 Dec 02
0
nouveau: refcount_t splat on 4.15-rc1 on nv50
.../0x2c0
[ 10.053874] nouveau_drm_probe+0x1b9/0x240 [nouveau]
[ 10.058986] ? __pm_runtime_resume+0x68/0xb0
[ 10.063409] local_pci_probe+0x5e/0xf0
[ 10.067300] work_for_cpu_fn+0x10/0x30
[ 10.071183] process_one_work+0x21a/0x670
[ 10.075325] worker_thread+0x256/0x500
[ 10.079208] ? manage_workers+0x1e0/0x1e0
[ 10.083362] kthread+0x169/0x220
[ 10.086730] ? kthread_create_worker_on_cpu+0x40/0x40
[ 10.091933] ret_from_fork+0x1f/0x30
[ 10.095655] Code: ff 84 c0 74 02 5b c3 0f b6 1d 59 b2 a6 01 80 fb 01 77 1c 83 e3 01 75 ed 48 c7 c7 c8 f1 3f 82 c6 05 41 b2 a6 01 01 e8 50 02 8d ff <...
2017 Oct 23
1
problems running a vol over IPoIB, and qemu off it?
...017] [<ffffffffc06e1608>] ipoib_cm_tx_start+0x268/0x3f0 [ib_ipoib]
[Mon Oct 23 16:43:32 2017] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[Mon Oct 23 16:43:32 2017] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[Mon Oct 23 16:43:32 2017] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[Mon Oct 23 16:43:32 2017] [<ffffffff810b098f>] kthread+0xcf/0xe0
[Mon Oct 23 16:43:32 2017] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[Mon Oct 23 16:43:32 2017] [<ffffffff816b4f58>] ret_from_fork+0x58/0x90
[Mon Oct 23 16:43:32 2017] [<ffff...
2019 Aug 02
1
nouveau problem
...ntime_work+0x8a/0xc0
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946baf9f>] process_one_work+0x17f/0x440
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946bc036>] worker_thread+0x126/0x3c0
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946bbf10>] ? manage_workers.isra.25+0x2a0/0x2a0
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946c2e81>] kthread+0xd1/0xe0
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946c2db0>] ? insert_kthread_work+0x40/0x40
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff94d76c1d>] ret_fro...
2013 Mar 27
0
OCFS2 issue reports, any ideas or patches, Thanks
...overy+0x90/0x90 [ocfs2]
Mar 27 10:54:08 cvk-7 kernel: [ 361.374561] [<ffffffff81084e2a>] process_one_work+0x11a/0x480
Mar 27 10:54:08 cvk-7 kernel: [ 361.374565] [<ffffffff81085bd4>] worker_thread+0x164/0x370
Mar 27 10:54:08 cvk-7 kernel: [ 361.374570] [<ffffffff81085a70>] ? manage_workers.isra.29+0x130/0x130
Mar 27 10:54:08 cvk-7 kernel: [ 361.374574] [<ffffffff8108a42c>] kthread+0x8c/0xa0
Mar 27 10:54:08 cvk-7 kernel: [ 361.374579] [<ffffffff81666bf4>] kernel_thread_helper+0x4/0x10
Mar 27 10:54:08 cvk-7 kernel: [ 361.374583] [<ffffffff8108a3a0>] ? flush_kthr...
2012 Aug 24
4
[PATCH] Btrfs: pass lockdep rwsem metadata to async commit transaction
The freeze rwsem is taken by sb_start_intwrite() and dropped during
commit_transaction() or end_transaction(). In the async case, that happens in a
worker thread. Tell lockdep that the calling thread is releasing ownership of
the rwsem and that the async thread is picking it up.
Josef and I worked out a more complicated solution that made the async
commit thread join and potentially get a later transaction, but
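As a rough illustration of the hand-off described above, here is a minimal sketch (not the actual btrfs patch): freeze_rwsem, queue_async_commit() and async_commit_fn() are hypothetical names, and the rwsem_acquire_read()/rwsem_release() lockdep annotations are written with the current macro signatures (older kernels passed an extra argument to rwsem_release()).

#include <linux/kernel.h>
#include <linux/lockdep.h>
#include <linux/rwsem.h>
#include <linux/workqueue.h>

static DECLARE_RWSEM(freeze_rwsem);        /* stand-in for the sb freeze rwsem */

static void async_commit_fn(struct work_struct *work)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
        /* Tell lockdep this worker now owns the read side it inherited. */
        rwsem_acquire_read(&freeze_rwsem.dep_map, 0, 1, _THIS_IP_);
#endif
        /* ... do the commit here, then drop the rwsem for real ... */
        up_read(&freeze_rwsem);
}

static DECLARE_WORK(commit_work, async_commit_fn);

static void queue_async_commit(void)
{
        down_read(&freeze_rwsem);          /* caller takes freeze protection */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
        /* The caller stops owning it here; the worker picks it up. */
        rwsem_release(&freeze_rwsem.dep_map, _THIS_IP_);
#endif
        schedule_work(&commit_work);       /* the worker finishes and releases it */
}

The point is only that lockdep's notion of ownership follows the thread that will actually call up_read(), so the rwsem no longer appears to be released by a thread that never acquired it.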
2013 Apr 28
2
Is it one issue? Do you have some good ideas? Thanks a lot.
...a/0xb0 [ocfs2]
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124232] [<ffffffff81084e2a>] process_one_work+0x11a/0x480
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124236] [<ffffffff81085bd4>] worker_thread+0x164/0x370
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124241] [<ffffffff81085a70>] ? manage_workers.isra.29+0x130/0x130
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124246] [<ffffffff8108a42c>] kthread+0x8c/0xa0
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124251] [<ffffffff81666bf4>] kernel_thread_helper+0x4/0x10
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124255] [<ffffffff8108a3a0>] ? f...
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604
2012 Jul 25
9
Regression in kernel 3.5 as Dom0 regarding PCI Passthrough?!
Hi!
I noticed a serious regression with 3.5 as the Dom0 kernel (3.4 was rock
solid):
1st: only the GPU PCI passthrough works; the PCI USB controller is not
recognized within the DomU (HVM Win7 64).
Dom0 cmdline is:
ro root=LABEL=dom0root
xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
security=apparmor noirqdebug nouveau.msi=1
Only 8:00.0 and 8:00.1 get passed through
2012 Nov 15
5
[Bug 57151] New: repeatable nouveau driver crashes/hangs during resume on Dell Latitude E6510 when drm.debug=14
...14 23:21:27 karolszk-lap kernel: [ 613.414816] [<f8d77790>] ? drm_helper_mode_fill_fb_struct+0x30/0x30 [drm_kms_helper]
Nov 14 23:21:27 karolszk-lap kernel: [ 613.414824] [<c10754e4>] worker_thread+0x124/0x2d0
Nov 14 23:21:27 karolszk-lap kernel: [ 613.414831] [<c10753c0>] ? manage_workers.isra.28+0x110/0x110
Nov 14 23:21:27 karolszk-lap kernel: [ 613.414839] [<c10792dd>] kthread+0x6d/0x80
Nov 14 23:21:27 karolszk-lap kernel: [ 613.414846] [<c1079270>] ? flush_kthread_worker+0x80/0x80
Nov 14 23:21:27 karolszk-lap kernel: [ 613.414853] [<c15ae33e>] kernel_threa...
2013 Aug 22
5
[Bug 68456] New: kernel NULL pointer dereference on 'modprobe nouveau'
...kernel: [<ffffffff8105350e>] ? work_for_cpu_fn+0xb/0x11
kernel: [<ffffffff8105502d>] ? process_one_work+0x1c1/0x2c8
kernel: [<ffffffff8105514c>] ? process_scheduled_works+0x18/0x25
kernel: [<ffffffff81055875>] ? worker_thread+0x1eb/0x29b
kernel: [<ffffffff8105568a>] ? manage_workers.isra.25+0x1ae/0x1ae
kernel: [<ffffffff81059f28>] ? kthread+0xad/0xb5
kernel: [<ffffffff81059e7b>] ? __kthread_parkme+0x5e/0x5e
kernel: [<ffffffff813b0d6c>] ? ret_from_fork+0x7c/0xb0
kernel: [<ffffffff81059e7b>] ? __kthread_parkme+0x5e/0x5e
kernel: Code: 48 81 c6 80 dc 00 00...
2012 Apr 20
44
Ceph on btrfs 3.4rc
...mmit_transaction+0xa50/0xa50 [btrfs]
[87703.897271] [<ffffffffa035205f>] do_async_commit+0x1f/0x30 [btrfs]
[87703.904262] [<ffffffff81068949>] process_one_work+0x129/0x450
[87703.910777] [<ffffffff8106b7eb>] worker_thread+0x17b/0x3c0
[87703.916991] [<ffffffff8106b670>] ? manage_workers+0x220/0x220
[87703.923504] [<ffffffff810703fe>] kthread+0x9e/0xb0
[87703.928952] [<ffffffff8158c224>] kernel_thread_helper+0x4/0x10
[87703.935555] [<ffffffff81070360>] ? kthread_freezable_should_stop+0x70/0x70
[87703.943323] [<ffffffff8158c220>] ? gs_change+0x13/0x13
[87...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...f5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.187237] [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187241] [<ffffffff960b312f>] process_one_work+0x17f/0x440
[ 8400.187245] [<ffffffff960b3df6>] worker_thread+0x126/0x3c0
[ 8400.187249] [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0
[ 8400.187253] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.187257] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.187261] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.187265] [<ffffffff960bb090>] ? insert_kthread_wor...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...ce+0x2c/0x70 [xfs]
> [ 8400.187237] [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs]
> [ 8400.187241] [<ffffffff960b312f>] process_one_work+0x17f/0x440
> [ 8400.187245] [<ffffffff960b3df6>] worker_thread+0x126/0x3c0
> [ 8400.187249] [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0
> [ 8400.187253] [<ffffffff960bb161>] kthread+0xd1/0xe0
> [ 8400.187257] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
> [ 8400.187261] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
> [ 8400.187265] [<ffffffff960bb090>]...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...s]
>> [ 8400.187237] [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs]
>> [ 8400.187241] [<ffffffff960b312f>] process_one_work+0x17f/0x440
>> [ 8400.187245] [<ffffffff960b3df6>] worker_thread+0x126/0x3c0
>> [ 8400.187249] [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0
>> [ 8400.187253] [<ffffffff960bb161>] kthread+0xd1/0xe0
>> [ 8400.187257] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
>> [ 8400.187261] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
>> [ 8400.187265] [<ffffff...
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at
ovirt1.nwfiber.com:/gluster/brick1/engine, whereas ovirt2.* shows decent
numbers. Is everything OK with the brick on ovirt1? Are the bricks of the
engine volume on both these servers identical in terms of their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote:
> Hi:
>
> Thank you. I
2017 Aug 23
2
virt-sysprep: error: no operating systems were found in the guest image on libguestfs-1.36.5
..._buf_iodone_work+0x85/0x100 [xfs]
[ 2.364667] [<ffffffffa01faf35>] xfs_buf_iodone_work+0x85/0x100 [xfs]
[ 2.365701] [<ffffffff81078ef9>] process_one_work+0x179/0x460
[ 2.366621] [<ffffffff81079fb6>] worker_thread+0x116/0x3b0
[ 2.367501] [<ffffffff81079ea0>] ? manage_workers.isra.25+0x290/0x290
[ 2.368523] [<ffffffff81080340>] kthread+0xc0/0xd0
[ 2.369296] [<ffffffff81080280>] ? insert_kthread_work+0x40/0x40
[ 2.370252] [<ffffffff815a6088>] ret_from_fork+0x58/0x90
[ 2.371096] [<ffffffff81080280>] ? insert_kthread_work+0x40/0x40
[...