search for: __wake_up_common

Displaying 20 results from an estimated 34 matches for "__wake_up_common".

2017 Dec 14
1
Xen PV DomU running Kernel 4.14.5-1.el7.elrepo.x86_64: xl -v vcpu-set <domU> <val> triggers domU kernel WARNING, then domU becomes unresponsive
...0000) knlGS:0000000000000000 CS: e033 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000010 CR3: 000000000683a000 CR4: 0000000000042660 Call Trace: ? coretemp_add_core+0x50/0x50 [coretemp] cpuhp_invoke_callback+0xe9/0x700 ? put_prev_task_fair+0x26/0x40 ? __schedule+0x2d0/0x6e0 ? __wake_up_common+0x84/0x130 ? __wake_up_common+0x84/0x130 cpuhp_thread_fun+0xee/0x170 smpboot_thread_fn+0x10c/0x160 ? smpboot_create_threads+0x80/0x80 kthread+0x10a/0x140 ? kthread_probe_data+0x40/0x40 ret_from_fork+0x1f/0x30 Code: 11 15 41 e1 49 89 c5 b8 f4 ff ff ff 4d 85 ed 0f 84 66 ff ff ff 4c 89...
2008 Jun 18
2
Trouble brewing in dmesg... any ideas?
...lt;c04d7a49>] blk_plug_device+0x5e/0x85 [<f884e2e2>] make_request+0x520/0x52a [raid1] [<f8860679>] journal_stop+0x1b0/0x1ba [jbd] [<c041fa31>] enqueue_task+0x29/0x39 [<c041f8de>] task_rq_lock+0x31/0x58 [<c04202a7>] try_to_wake_up+0x371/0x37b [<c041ea84>] __wake_up_common+0x2f/0x53 [<c041f871>] __wake_up+0x2a/0x3d [<c04806ab>] core_sys_select+0x2a9/0x2ca [<c052e771>] n_tty_receive_buf+0xc5e/0xcab [<c041ea84>] __wake_up_common+0x2f/0x53 [<c041f871>] __wake_up+0x2a/0x3d [<c0529c5e>] tty_wakeup+0x44/0x48 [<c04361fd>] rem...
2009 Jan 27
13
[Patch] fix xenfb_update_screen bogus rect
...pdate_screen bogus rect 2147483647 0 2147483647 0 BUG: warning at /root/linux-2.6.18-xen.hg/drivers/xen/fbfront/xenfb.c:240/xenfb_update_screen() Call Trace: [<ffffffff8036920e>] xenfb_thread+0x19b/0x2be [<ffffffff8022730a>] try_to_wake_up+0x33b/0x34c [<ffffffff80225c3d>] __wake_up_common+0x3e/0x68 [<ffffffff80241e20>] autoremove_wake_function+0x0/0x2e [<ffffffff80241a75>] keventd_create_kthread+0x0/0x61 [<ffffffff80369073>] xenfb_thread+0x0/0x2be [<ffffffff80241a75>] keventd_create_kthread+0x0/0x61 [<ffffffff80241ceb>] kthread+0xd4/0x109 [&...
2017 Dec 12
5
Xen PV DomU running Kernel 4.14.5-1.el7.elrepo.x86_64: xl -v vcpu-set <domU> <val> triggers domU kernel WARNING, then domU becomes unresponsive
...3 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffffffffff600400 CR3: 000000026d953000 CR4: 0000000000042660 Call Trace: blk_mq_run_work_fn+0x31/0x40 process_one_work+0x174/0x440 ? xen_mc_flush+0xad/0x1b0 ? schedule+0x3a/0xa0 worker_thread+0x6b/0x410 ? default_wake_function+0x12/0x20 ? __wake_up_common+0x84/0x130 ? maybe_create_worker+0x120/0x120 ? schedule+0x3a/0xa0 ? _raw_spin_unlock_irqrestore+0x16/0x20 ? maybe_create_worker+0x120/0x120 kthread+0x111/0x150 ? __kthread_init_worker+0x40/0x40 ret_from_fork+0x25/0x30 Code: 89 df e8 06 2f d9 ff 4c 89 e7 41 89 c5 e8 0b 6e 00 00 44 89 e...
2013 Feb 13
0
Re: Heavy memory leak when using quota groups
...10d/0x3d0 [btrfs] > [ 5123.800374] [<ffffffff8106a3a0>] ? cascade+0xa0/0xa0 > [ 5123.800384] [<ffffffffa0549935>] finish_ordered_fn+0x15/0x20 [btrfs] > [ 5123.800394] [<ffffffffa056ac2f>] worker_loop+0x16f/0x5d0 [btrfs] > [ 5123.800401] [<ffffffff810888a8>] ? __wake_up_common+0x58/0x90 > [ 5123.800411] [<ffffffffa056aac0>] ? btrfs_queue_worker+0x310/0x310 [btrfs] > [ 5123.800415] [<ffffffff8107f080>] kthread+0xc0/0xd0 > [ 5123.800417] [<ffffffff8107efc0>] ? flush_kthread_worker+0xb0/0xb0 > [ 5123.800423] [<ffffffff816f452c>] ret_f...
2014 May 29
2
Divide error in kvm_unlock_kick()
Paolo Bonzini <pbonzini at redhat.com> wrote: > Il 29/05/2014 19:45, Chris Webb ha scritto: >> Chris Webb <chris at arachsys.com> wrote: >> >>> My CPU flags inside the crashing guest look like this: >>> >>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush >>> mmx fxsr sse sse2 ht syscall nx mmxext
2003 May 22
0
use-after-free in smbfs on 2.5.69-mm5
...e90>] __generic_file_aio_read+0x184/0x1a0 [<c0137bfc>] file_read_actor+0x0/0x110 [<c0137f77>] generic_file_read+0x7f/0x9c [<c015763f>] do_sync_write+0x7f/0xb0 [<ec8f0f18>] rtc_wait+0x18/0x20 [rtc] [<c0117383>] default_wake_function+0x17/0x1c [<c01173c2>] __wake_up_common+0x3a/0x54 [<ec8f0f00>] rtc_wait+0x0/0x20 [rtc] [<ec8f0f30>] rtc_task_lock+0x0/0x18 [rtc] [<c016b61a>] kill_fasync+0x16/0x1c [<ec991c32>] smb_file_read+0x4e/0x5c [smbfs] [<c015758e>] vfs_read+0xa2/0xd4 [<c0157770>] sys_read+0x30/0x50 [<c0109853>] sysc...
2014 Jun 01
0
Divide error in kvm_unlock_kick()
...000000000000003 Call Trace: <IRQ> [<ffffffff815852d0>] _raw_spin_unlock+0x36/0x5b [<ffffffff810dd694>] try_to_wake_up+0x1f4/0x217 [<ffffffff810dd6f6>] default_wake_function+0xd/0xf [<ffffffff810e99f0>] autoremove_wake_function+0xd/0x2f [<ffffffff810e944f>] __wake_up_common+0x50/0x7c [<ffffffff810e962f>] __wake_up+0x34/0x46 [<ffffffff810f3b45>] rsp_wakeup+0x1c/0x1e [<ffffffff81112e31>] irq_work_run+0x77/0x9b [<ffffffff810063e2>] smp_irq_work_interrupt+0x2a/0x31 [<ffffffff8158739d>] irq_work_interrupt+0x6d/0x80 [<ffffffff81585336&...
2008 Aug 09
4
Upgrade 3.0.3 to 3.2.1
Hi, I'm preparing to upgrade my servers from xen 3.0.3 32-bit to 3.2.1 64-bit. The old system: Debian 4.0 i386 with included hypervisor 3.0.3 (pae) and dom0 kernel. The new system: Debian lenny amd64 with the included hypervisor 3.2.1 and dom0 kernel from Debian 4.0 amd64. My domUs have a self compiled kernel out of the dom0 kernel of the old system (mainly the dom0 kernel but
2011 Feb 25
2
Bug in kvm_set_irq
...685.246123] [<ffffffff8106b6f2>] ? process_one_work+0x112/0x460 Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8106be25>] ? worker_thread+0x145/0x410 Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8103a3d0>] ? __wake_up_common+0x50/0x80 Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8106bce0>] ? worker_thread+0x0/0x410 Feb 23 13:56:19 ayrshire.u06.univ-nantes.prive kernel: [ 685.246123] [<ffffffff8106bce0>] ? worker_thread+0x0/0x410 Feb 23 13:56:19 ayrshire.u06.univ-nante...
2003 Nov 16
1
Bug in 2.6.0-9
...09016>] do_divide_error+0x0/0xa7 [<c010929a>] do_invalid_op+0x8a/0x93 [<c017637f>] journal_add_journal_head+0x70/0xda [<c0110cbd>] try_to_wake_up+0xa0/0x141 [<c0110d54>] try_to_wake_up+0x137/0x141 [<c01118e9>] default_wake_function+0x16/0x18 [<c0111913>] __wake_up_common+0x28/0x4b [<c0108a6d>] error_code+0x2d/0x40 [<c017637f>] journal_add_journal_head+0x70/0xda [<c0171046>] journal_get_write_access+0xb/0x2d [<c016aa53>] ext3_reserve_inode_write+0x33/0x89 [<c016aac1>] ext3_mark_inode_dirty+0x18/0x31 [<c017082b>] journal_star...
2011 Nov 09
12
WARNING: at fs/btrfs/inode.c:2198 btrfs_orphan_commit_root+0xa8/0xc0
...do_async_commit+0x1a/0x30 [btrfs] [ 3924.297873] [<ffffffff81350b70>] ? powersave_bias_target+0x170/0x170 [ 3924.297877] [<ffffffff8104e0bb>] process_one_work+0x10b/0x3d0 [ 3924.297880] [<ffffffff8104e7b6>] worker_thread+0x156/0x410 [ 3924.297884] [<ffffffff81029f59>] ? __wake_up_common+0x59/0x90 [ 3924.297887] [<ffffffff8104e660>] ? rescuer_thread+0x2e0/0x2e0 [ 3924.297890] [<ffffffff810523b6>] kthread+0x96/0xa0 [ 3924.297893] [<ffffffff813feaf4>] kernel_thread_helper+0x4/0x10 [ 3924.297896] [<ffffffff81052320>] ? kthread_worker_fn+0x130/0x130 [ 3924.2...
2006 Oct 26
4
Domain Crash and Xend can't restart
I have a single VM (of 11) that has a recurring problem. This image has moved from machine to machine, with the problem following it. This image has been rebuilt from scratch, and the problem recurred. It would appear that there is something in the behaviour of this VM which causes it to crash and causes Xend to become unhappy. The problem presents as: Domain crashes, becomes zombie. xm
2002 Dec 06
1
Assertion failure in do_get_write_access() at fs/jbd/transaction.c:746
...c/0x3d8 [<c01091b7>] die+0x73/0x74 [<c01094d0>] do_invalid_op+0x0/0xc0 [<c0109584>] do_invalid_op+0xb4/0xc0 [<c01722f7>] do_get_write_access+0x3f3/0x530 [<c0110fe8>] try_to_wake_up+0x100/0x10c [<c011187d>] default_wake_function+0x1d/0x34 [<c01118c7>] __wake_up_common+0x33/0x4c [<c0111900>] __wake_up+0x20/0x40 [<c0108c85>] error_code+0x2d/0x38 [<c01722f7>] do_get_write_access+0x3f3/0x530 [<c0172487>] journal_get_write_access+0x53/0x78 [<c01695df>] ext3_do_update_inode+0x21f/0x3e0 [<c0169b15>] ext3_reserve_inode_write+0x3...
2017 Dec 19
1
Xen PV DomU running Kernel 4.14.5-1.el7.elrepo.x86_64: xl -v vcpu-set <domU> <val> triggers domU kernel WARNING, then domU becomes unresponsive
...033 > CR2: ffffffffff600400 CR3: 000000026d953000 CR4: 0000000000042660 > Call Trace: > blk_mq_run_work_fn+0x31/0x40 > process_one_work+0x174/0x440 > ? xen_mc_flush+0xad/0x1b0 > ? schedule+0x3a/0xa0 > worker_thread+0x6b/0x410 > ? default_wake_function+0x12/0x20 > ? __wake_up_common+0x84/0x130 > ? maybe_create_worker+0x120/0x120 > ? schedule+0x3a/0xa0 > ? _raw_spin_unlock_irqrestore+0x16/0x20 > ? maybe_create_worker+0x120/0x120 > kthread+0x111/0x150 > ? __kthread_init_worker+0x40/0x40 > ret_from_fork+0x25/0x30 > Code: 89 df e8 06 2f d9 ff 4c 89 e...
2010 Jun 07
2
Odd INFO "120 seconds" in logs for 2.6.18-194.3.1
...[<ffffffff88ddc753>] :nfsd:nfsd_acceptable+0x0/0xd8 Jun 7 19:45:21 sraid3 kernel: [<ffffffff88de074f>] :nfsd:exp_get_by_name+0x5b/0x71 Jun 7 19:45:21 sraid3 kernel: [<ffffffff88de0d3e>] :nfsd:exp_find_key+0x89/0x9c Jun 7 19:45:21 sraid3 kernel: [<ffffffff8008b4b1>] __wake_up_common+0x3e/0x68 Jun 7 19:45:21 sraid3 kernel: [<ffffffff8009b1dd>] set_current_groups+0x159/0x164 Jun 7 19:45:21 sraid3 kernel: [<ffffffff88dcf7f3>] :exportfs:export_decode_fh+0x4b/0x50 Jun 7 19:45:21 sraid3 kernel: [<ffffffff88ddcac5>] :nfsd:fh_verify+0x29a/0x4bd Jun 7 19:45:...
2006 Jul 31
1
x86_64 reproducible server PANIC with latest kernel
...6} migration/0 S 0000010008002a20 0 2 1 3 (L-TLB) 00000102fc72dec8 0000000000000046 0000010100069760 0000000000000002 00000001fc72dec8 0000000000000000 0000000000000012 0000000000000001 000001010001f7f0 00000000000002b7 Call Trace:<ffffffff80133b66>{__wake_up_common+67} <ffffffff80134c85>{migration_thread+323} <ffffffff80134b42>{migration_thread+0} <ffffffff8014b22a>{kthread+199} <ffffffff80110f47>{child_rip+8} <ffffffff8014b163>{kthread+0} <ffffffff80110f3f>{child_rip+0} ksoftirqd/0 S 00000000000...
2018 Jul 17
2
Samba 4.8.3 out of memory error
...rnel: [<ffffffff811349f1>] ? select_bad_process+0xe1/0x120 Jul 16 14:14:36 soda kernel: [<ffffffff81134ef0>] ? out_of_memory+0x220/0x3c0 Jul 16 14:14:36 soda kernel: [<ffffffff811418e1>] ? __alloc_pages_nodemask+0x941/0x960 Jul 16 14:14:36 soda kernel: [<ffffffff81060d0c>] ? __wake_up_common+0x5c/0x90 Jul 16 14:14:36 soda kernel: [<ffffffff8117aefa>] ? alloc_pages_vma+0x9a/0x150 Jul 16 14:14:36 soda kernel: [<ffffffff8116e2b2>] ? read_swap_cache_async+0xf2/0x160 Jul 16 14:14:36 soda kernel: [<ffffffff8116ee09>] ? valid_swaphandles+0x69/0x160 Jul 16 14:14:36 soda kerne...