Displaying 15 results from an estimated 53 matches for "0x750".
2017 Jan 24
1
[PATCH 2/2] drm/nouveau: Queue hpd_work on (runtime) resume
...m_callback+0x24/0x80
[  246.899695]  [<ffffffff8c4ced30>] ? pci_pm_runtime_resume+0xa0/0xa0
[  246.899698]  [<ffffffff8c5fffee>] rpm_suspend+0x11e/0x6f0
[  246.899701]  [<ffffffff8c60149b>] pm_runtime_work+0x7b/0xc0
[  246.899707]  [<ffffffff8c0afe58>] process_one_work+0x1f8/0x750
[  246.899710]  [<ffffffff8c0afdd9>] ? process_one_work+0x179/0x750
[  246.899713]  [<ffffffff8c0b03fb>] worker_thread+0x4b/0x4f0
[  246.899717]  [<ffffffff8c0bf8fc>] ? preempt_count_sub+0x4c/0x80
[  246.899720]  [<ffffffff8c0b03b0>] ? process_one_work+0x750/0x750
[  246.899...
2020 Jan 09
1
[BUG] nouveau lockdep splat
...lock+0x134/0xc70
[   98.459526]        nouveau_svmm_invalidate_range_start+0x71/0x110 [nouveau]
[   98.466593]        __mmu_notifier_invalidate_range_start+0x25c/0x320
[   98.473031]        unmap_vmas+0x10c/0x200
[   98.477130]        unmap_region+0x1a4/0x240
[   98.481410]        __do_munmap+0x3e0/0x750
[   98.485535]        __vm_munmap+0xbc/0x130
[   98.489599]        __x64_sys_munmap+0x3c/0x50
[   98.493951]        do_syscall_64+0x68/0x280
[   98.498162]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[   98.503778] 
[   98.503778] -> #2 (mmu_notifier_invalidate_range_start){+.+.}:
[   98.511...
2019 Aug 06
2
Xorg indefinitely hangs in kernelspace
...secs"
disables this message.
[354073.738332] Xorg            D    0   920    854 0x00404004
[354073.738334] Call Trace:
[354073.738340]  __schedule+0x2ba/0x650
[354073.738342]  schedule+0x2d/0x90
[354073.738343]  schedule_preempt_disabled+0xe/0x10
[354073.738345]  __ww_mutex_lock.isra.11+0x3e0/0x750
[354073.738346]  __ww_mutex_lock_slowpath+0x16/0x20
[354073.738347]  ww_mutex_lock+0x34/0x50
[354073.738352]  ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
[354073.738356]  qxl_release_reserve_list+0x67/0x150 [qxl]
[354073.738358]  ? qxl_bo_pin+0xaa/0x190 [qxl]
[354073.738359]  qxl_cursor_atomic_update+...
2016 Nov 21
2
[PATCH 1/2] drm/nouveau: Rename acpi_work to hpd_work
We need to call drm_helper_hpd_irq_event() on resume to properly detect
monitor connection / disconnection on some laptops. For runtime-resume
(which gets called on resume from normal suspend too) we must call
drm_helper_hpd_irq_event() from a workqueue to avoid a deadlock.
Rename acpi_work to hpd_work, and move it out of the #ifdef CONFIG_ACPI
blocks to make it suitable for generic work.
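(The deferral pattern this patch describes, as a minimal hedged sketch in C; my_drm, my_hpd_work and my_runtime_resume are illustrative names, not the actual nouveau code.)

#include <linux/workqueue.h>
#include <drm/drm_crtc_helper.h>

struct my_drm {                          /* hypothetical driver state */
	struct drm_device *dev;
	struct work_struct hpd_work;     /* INIT_WORK(..., my_hpd_work) at probe time */
};

static void my_hpd_work(struct work_struct *work)
{
	struct my_drm *drm = container_of(work, struct my_drm, hpd_work);

	/* Runs later in process context, outside the resume path,
	 * which is what avoids the deadlock described above. */
	drm_helper_hpd_irq_event(drm->dev);
}

static int my_runtime_resume(struct device *dev)
{
	struct my_drm *drm = dev_get_drvdata(dev);

	/* ... power the hardware back up ... */

	/* Only queue the work here; do not call
	 * drm_helper_hpd_irq_event() directly from resume. */
	schedule_work(&drm->hpd_work);
	return 0;
}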
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...eously hit a
number of splats in the block layer:
* inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
  jbd2_trans_will_send_data_barrier
* BUG: sleeping function called from invalid context at mm/mempool.c:320
* WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
... I've included the full splats at the end of the mail.
These all happen in the context of the virtio block IRQ handler, so I
wonder if this calls something that doesn't expect to be called from IRQ
context. Is it valid to call blk_mq_complete_request() or
blk_mq_end_request() from an I...
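(For reference, the pattern being asked about, as a hedged sketch; my_vblk_irq and the request-lookup helper are hypothetical stand-ins, not the virtio-blk source. blk_mq_complete_request() only signals completion and lets blk-mq defer the real completion work out of hard-IRQ context, whereas blk_mq_end_request() completes the request synchronously in the caller's context.)

#include <linux/blk-mq.h>
#include <linux/interrupt.h>

/* Hypothetical: however the driver maps a completed descriptor
 * back to its struct request. */
static struct request *fetch_done_request(void *data);

static irqreturn_t my_vblk_irq(int irq, void *data)
{
	struct request *rq = fetch_done_request(data);

	/* Hands the request to the blk-mq completion machinery; the
	 * driver's ->complete() callback then typically runs via
	 * softirq rather than directly here in the hard-IRQ handler. */
	blk_mq_complete_request(rq);
	return IRQ_HANDLED;
}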
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...e block layer:
> 
> * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
>   jbd2_trans_will_send_data_barrier
> 
> * BUG: sleeping function called from invalid context at mm/mempool.c:320
> 
> * WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
> 
> ... I've included the full splats at the end of the mail.
> 
> These all happen in the context of the virtio block IRQ handler, so I
> wonder if this calls something that doesn't expect to be called from IRQ
> context. Is it valid to call blk_mq_complete_request() or...
2019 Apr 30
2
Xorg hangs in kernelspace with qxl
...Not tainted 5.0.0-13-generic #14-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Xorg            D    0   879    790 0x00400004
Call Trace:
 __schedule+0x2d0/0x840
 schedule+0x2c/0x70
 schedule_preempt_disabled+0xe/0x10
 __ww_mutex_lock.isra.11+0x3e0/0x750
 __ww_mutex_lock_slowpath+0x16/0x20
 ww_mutex_lock+0x34/0x50
 ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
 qxl_release_reserve_list+0x67/0x150 [qxl]
 ? qxl_bo_pin+0x11d/0x200 [qxl]
 qxl_cursor_atomic_update+0x1b0/0x2e0 [qxl]
 drm_atomic_helper_commit_planes+0xb9/0x220 [drm_kms_helper]
 drm_atomic_help...
2019 Sep 06
4
Xorg indefinitely hangs in kernelspace
...ge.
> [354073.738332] Xorg            D    0   920    854 0x00404004
> [354073.738334] Call Trace:
> [354073.738340]  __schedule+0x2ba/0x650
> [354073.738342]  schedule+0x2d/0x90
> [354073.738343]  schedule_preempt_disabled+0xe/0x10
> [354073.738345]  __ww_mutex_lock.isra.11+0x3e0/0x750
> [354073.738346]  __ww_mutex_lock_slowpath+0x16/0x20
> [354073.738347]  ww_mutex_lock+0x34/0x50
> [354073.738352]  ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
> [354073.738356]  qxl_release_reserve_list+0x67/0x150 [qxl]
> [354073.738358]  ? qxl_bo_pin+0xaa/0x190 [qxl]
> [354073.7383...
2016 Jun 30
6
[PATCH] backlight: Avoid double fbcon backlight handling
...]        [<ffffffff8154e611>] do_bind_con_driver+0x1c1/0x3a0
[   18.984143]        [<ffffffff8154eaf6>] do_take_over_console+0x116/0x180
[   18.984145]        [<ffffffff814bd3a7>] do_fbcon_takeover+0x57/0xb0
[   18.984147]        [<ffffffff814c1e48>] fbcon_event_notify+0x658/0x750
[   18.984150]        [<ffffffff810abcae>] notifier_call_chain+0x3e/0xb0
[   18.984152]        [<ffffffff810ac1ad>] __blocking_notifier_call_chain+0x4d/0x70
[   18.984154]        [<ffffffff810ac1e6>] blocking_notifier_call_chain+0x16/0x20
[   18.984156]        [<ffffffff814c748...
2009 Dec 04
2
[LLVMdev] linking a parser bitcode
...here the sj/lj stuff is coming from.  Does this mean that the LLVM libraries we're using are broken?
Type.cpp
..\..\..\..\llvm\lib/libLLVMCore.a(Type.cpp.obj):Type.cpp.text+0x722): undefined reference to `__gxx_personality_sj0'
..\..\..\..\llvm\lib/libLLVMCore.a(Type.cpp.obj):Type.cpp.text+0x750): undefined reference to `_Unwind_SjLj_Register'
..\..\..\..\llvm\lib/libLLVMCore.a(Type.cpp.obj):Type.cpp.text+0x848): undefined reference to `_Unwind_SjLj_Resume'
..\..\..\..\llvm\lib/libLLVMCore.a(Type.cpp.obj):Type.cpp.text+0xa31): undefined reference to `_Unwind_SjLj_Resume'
Thank...
2018 Aug 05
2
[PATCH net-next 0/6] virtio_net: Add ethtool stat items
...44] R13: 0000000000000000 R14: 00007ffe83f38728 R15: 00007ffe83f37fd8
[   46.168778] Allocated by task 499:
[   46.168784]  kasan_kmalloc+0xa0/0xd0
[   46.168789]  __kmalloc+0x191/0x3a0
[   46.168795]  mpi_powm+0x956/0x2360
[   46.168801]  rsa_enc+0x1f0/0x3a0
[   46.168806]  pkcs1pad_verify+0x4c4/0x750
[   46.168815]  public_key_verify_signature+0x58b/0xac0
[   46.168821]  pkcs7_validate_trust+0x3bd/0x710
[   46.168830]  verify_pkcs7_signature+0xe8/0x1b0
[   46.168837]  mod_verify_sig+0x1d4/0x2a0
[   46.168842]  load_module+0x1689/0x6590
[   46.168847]  __do_sys_finit_module+0x192/0x1c0
[   46.16...
2019 Sep 06
0
[Spice-devel] Xorg indefinitely hangs in kernelspace
...Xorg            D    0   920    854 0x00404004
> > [354073.738334] Call Trace:
> > [354073.738340]  __schedule+0x2ba/0x650
> > [354073.738342]  schedule+0x2d/0x90
> > [354073.738343]  schedule_preempt_disabled+0xe/0x10
> > [354073.738345]  __ww_mutex_lock.isra.11+0x3e0/0x750
> > [354073.738346]  __ww_mutex_lock_slowpath+0x16/0x20
> > [354073.738347]  ww_mutex_lock+0x34/0x50
> > [354073.738352]  ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
> > [354073.738356]  qxl_release_reserve_list+0x67/0x150 [qxl]
> > [354073.738358]  ? qxl_bo_pin+0xaa/0x19...
2018 Jan 10
1
soft lockup after setting multicast_router of bridge and its port to 2
...le_frame_finish+0x0/0x2a0 [bridge]
     [<ffffffff814736b6>] ? nf_hook_slow+0x76/0x120
     [<ffffffffa04f48f0>] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
     [<ffffffffa04f4d1c>] ? br_handle_frame+0x18c/0x250 [bridge]
     [<ffffffff81445709>] ? __netif_receive_skb+0x529/0x750
     [<ffffffff814397da>] ? __alloc_skb+0x7a/0x180
     [<ffffffff814492f8>] ? netif_receive_skb+0x58/0x60
     [<ffffffff81449400>] ? napi_skb_finish+0x50/0x70
     [<ffffffff8144ab79>] ? napi_gro_receive+0x39/0x50
     [<ffffffffa016887f>] ? bnx2x_rx_int+0x83f/0x1630...
2011 May 05
12
Having parent transid verify failed
...el: [13560.752108]  [<ffffffff813b0cf9>] ? mutex_unlock+0x9/0x10
May  5 14:15:14 mail kernel: [13560.752115]  [<ffffffffa087e9f4>] ? btrfs_run_ordered_operations+0x1f4/0x210 [btrfs]
May  5 14:15:14 mail kernel: [13560.752122]  [<ffffffffa0860fa3>] btrfs_commit_transaction+0x263/0x750 [btrfs]
May  5 14:15:14 mail kernel: [13560.752126]  [<ffffffff81079ff0>] ? autoremove_wake_function+0x0/0x40
May  5 14:15:14 mail kernel: [13560.752131]  [<ffffffffa085a9bd>] transaction_kthread+0x26d/0x290 [btrfs]
May  5 14:15:14 mail kernel: [13560.752137]  [<ffffffffa085a750>...
2018 Feb 26
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...* inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
> >   jbd2_trans_will_send_data_barrier
> > 
> > * BUG: sleeping function called from invalid context at mm/mempool.c:320
> > 
> > * WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
> > 
> > ... I've included the full splats at the end of the mail.
> > 
> > These all happen in the context of the virtio block IRQ handler, so I
> > wonder if this calls something that doesn't expect to be called from IRQ
> > context. Is it valid to call...