search for: mark_held_lock

Displaying 20 results from an estimated 48 matches for "mark_held_lock".

2019 Jun 14
0
[PATCH v2] drm/nouveau/dmem: missing mutex_lock in error path
...3e0/0x13e0 [nouveau] [ 1295.051912] drm_ioctl_kernel+0x14d/0x1a0 [ 1295.055930] ? drm_setversion+0x330/0x330 [ 1295.059971] drm_ioctl+0x308/0x530 [ 1295.063384] ? drm_version+0x150/0x150 [ 1295.067153] ? find_held_lock+0xac/0xd0 [ 1295.070996] ? __pm_runtime_resume+0x3f/0xa0 [ 1295.075285] ? mark_held_locks+0x29/0xa0 [ 1295.079230] ? _raw_spin_unlock_irqrestore+0x3c/0x50 [ 1295.084232] ? lockdep_hardirqs_on+0x17d/0x250 [ 1295.088768] nouveau_drm_ioctl+0x9a/0x100 [nouveau] [ 1295.093661] do_vfs_ioctl+0x137/0x9a0 [ 1295.097341] ? ioctl_preallocate+0x140/0x140 [ 1295.101623] ? match_held_lock+0x1b...
2019 Jul 26
0
[PATCH AUTOSEL 5.2 85/85] drm/nouveau/dmem: missing mutex_lock in error path
...3e0/0x13e0 [nouveau] [ 1295.051912] drm_ioctl_kernel+0x14d/0x1a0 [ 1295.055930] ? drm_setversion+0x330/0x330 [ 1295.059971] drm_ioctl+0x308/0x530 [ 1295.063384] ? drm_version+0x150/0x150 [ 1295.067153] ? find_held_lock+0xac/0xd0 [ 1295.070996] ? __pm_runtime_resume+0x3f/0xa0 [ 1295.075285] ? mark_held_locks+0x29/0xa0 [ 1295.079230] ? _raw_spin_unlock_irqrestore+0x3c/0x50 [ 1295.084232] ? lockdep_hardirqs_on+0x17d/0x250 [ 1295.088768] nouveau_drm_ioctl+0x9a/0x100 [nouveau] [ 1295.093661] do_vfs_ioctl+0x137/0x9a0 [ 1295.097341] ? ioctl_preallocate+0x140/0x140 [ 1295.101623] ? match_held_lock+0x1b...
2020 Mar 19
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
....120463] mmput+0xb5/0x210 [ 138.123444] do_exit+0x602/0x14c0 [ 138.126776] ? mm_update_next_owner+0x400/0x400 [ 138.131329] do_group_exit+0x8a/0x140 [ 138.135006] get_signal+0x25b/0x1080 [ 138.138606] do_signal+0x8c/0xa90 [ 138.141928] ? _raw_spin_unlock_irq+0x24/0x30 [ 138.146292] ? mark_held_locks+0x24/0x90 [ 138.150219] ? _raw_spin_unlock_irq+0x24/0x30 [ 138.154580] ? lockdep_hardirqs_on+0x190/0x280 [ 138.159026] ? setup_sigcontext+0x260/0x260 [ 138.163210] ? sigprocmask+0x10b/0x150 [ 138.166965] ? __x64_sys_rt_sigsuspend+0xe0/0xe0 [ 138.171594] ? __x64_sys_rt_sigprocmask+0xfb/...
2019 Jun 14
1
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
...3e0/0x13e0 [nouveau] [ 1295.051912] drm_ioctl_kernel+0x14d/0x1a0 [ 1295.055930] ? drm_setversion+0x330/0x330 [ 1295.059971] drm_ioctl+0x308/0x530 [ 1295.063384] ? drm_version+0x150/0x150 [ 1295.067153] ? find_held_lock+0xac/0xd0 [ 1295.070996] ? __pm_runtime_resume+0x3f/0xa0 [ 1295.075285] ? mark_held_locks+0x29/0xa0 [ 1295.079230] ? _raw_spin_unlock_irqrestore+0x3c/0x50 [ 1295.084232] ? lockdep_hardirqs_on+0x17d/0x250 [ 1295.088768] nouveau_drm_ioctl+0x9a/0x100 [nouveau] [ 1295.093661] do_vfs_ioctl+0x137/0x9a0 [ 1295.097341] ? ioctl_preallocate+0x140/0x140 [ 1295.101623] ? match_held_lock+0x1b...
2020 Mar 17
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote: > On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote: >> I've been using v7 of Ralph's tester and it is working well - it has >> DEVICE_PRIVATE support so I think it can test this flow too. Ralph are >> you able? >> >> This hunk seems trivial enough to me, can we include it now? > > I can send
2019 Jun 14
3
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc(). Reacquire the lock before continuing to the next page. Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> --- I found this while testing Jason Gunthorpe's hmm tree but this is independent of those changes. I guess it could go through David Airlie's tree for nouveau
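The fix described in that entry follows a common locking pattern: a loop that drops its mutex around a sleeping allocation must re-take the mutex before it touches the protected state again, and the error path must be consistent about whether the lock is still held. Below is a minimal userspace sketch of that pattern using pthreads; it is not the nouveau code, and the pool/refill names are made up for illustration.

    #include <pthread.h>
    #include <stdlib.h>

    struct pool {
            pthread_mutex_t lock;   /* protects free_count */
            int free_count;
    };

    /* Slow refill that must not run under pool->lock (it may "sleep"). */
    static int refill_chunk(struct pool *p)
    {
            (void)p;
            return rand() % 8 ? 0 : -1;   /* pretend the allocation can fail */
    }

    static int alloc_pages(struct pool *p, int npages)
    {
            int got = 0;

            pthread_mutex_lock(&p->lock);
            while (got < npages) {
                    if (p->free_count == 0) {
                            pthread_mutex_unlock(&p->lock);
                            if (refill_chunk(p))
                                    return -1;   /* error path: lock already dropped */
                            /* Re-acquire before touching free_count again;
                             * forgetting this re-lock is the class of bug the
                             * patch above fixes. */
                            pthread_mutex_lock(&p->lock);
                            p->free_count += 64;
                            continue;
                    }
                    p->free_count--;
                    got++;
            }
            pthread_mutex_unlock(&p->lock);
            return 0;
    }

    int main(void)
    {
            struct pool p = { PTHREAD_MUTEX_INITIALIZER, 0 };
            return alloc_pages(&p, 10) ? 1 : 0;
    }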
2014 Oct 13
2
v3.17, i915 vs nouveau: possible recursive locking detected
...20 [<ffffffffa010ae93>] ? i915_gem_unmap_dma_buf+0x33/0xc0 [i915] [<ffffffffa010ae93>] ? i915_gem_unmap_dma_buf+0x33/0xc0 [i915] [<ffffffff8170c014>] mutex_lock_nested+0x54/0x3d0 [<ffffffffa010ae93>] ? i915_gem_unmap_dma_buf+0x33/0xc0 [i915] [<ffffffff810df03a>] ? mark_held_locks+0x6a/0x90 [<ffffffffa010ae93>] i915_gem_unmap_dma_buf+0x33/0xc0 [i915] [<ffffffff814c3032>] dma_buf_unmap_attachment+0x22/0x40 [<ffffffffa0034e42>] drm_prime_gem_destroy+0x22/0x40 [drm] [<ffffffffa0299b5b>] nouveau_gem_object_del+0x3b/0x60 [nouveau] [<ffffffffa001c7...
2014 Oct 20
2
INFO: task echo:622 blocked for more than 120 seconds. - 3.18.0-0.rc0.git
...98 ffffffff81ee2690 [ 240.232931] Call Trace: [ 240.233467] [<ffffffff8185baf9>] schedule+0x29/0x70 [ 240.234025] [<ffffffff81860d1c>] schedule_timeout+0x26c/0x410 [ 240.234562] [<ffffffff81028c4a>] ? native_sched_clock+0x2a/0xa0 [ 240.235118] [<ffffffff811078bc>] ? mark_held_locks+0x7c/0xb0 [ 240.235645] [<ffffffff81861da0>] ? _raw_spin_unlock_irq+0x30/0x50 [ 240.236198] [<ffffffff81107a4d>] ? trace_hardirqs_on_caller+0x15d/0x200 [ 240.236729] [<ffffffff8185d52c>] wait_for_completion+0x10c/0x150 [ 240.237290] [<ffffffff810e51f0>] ? wake_up_st...
2010 Jul 10
1
deadlock possibility introduced by "drm/nouveau: use drm_mm in preference to custom code doing the same thing"
...0975b0>] __lock_acquire+0x883/0x8f4 [ 2417.747472] [<ffffffff8129f0c0>] ? drm_mm_put_block+0x17a/0x1c0 [ 2417.747475] [<ffffffff81097769>] lock_acquire+0x148/0x18d [ 2417.747477] [<ffffffff8129f0c0>] ? drm_mm_put_block+0x17a/0x1c0 [ 2417.747480] [<ffffffff81094fd7>] ? mark_held_locks+0x52/0x70 [ 2417.747483] [<ffffffff8143b1d9>] _raw_spin_lock+0x36/0x45 [ 2417.747486] [<ffffffff8129f0c0>] ? drm_mm_put_block+0x17a/0x1c0 [ 2417.747490] [<ffffffff8129f0c0>] drm_mm_put_block+0x17a/0x1c0 [ 2417.747496] [<ffffffffa00aed3e>] nouveau_gpuobj_del+0x167/0x1b5...
2014 Oct 16
0
[Intel-gfx] v3.17, i915 vs nouveau: possible recursive locking detected
...0ae93>] ? i915_gem_unmap_dma_buf+0x33/0xc0 [i915] > [<ffffffffa010ae93>] ? i915_gem_unmap_dma_buf+0x33/0xc0 [i915] > [<ffffffff8170c014>] mutex_lock_nested+0x54/0x3d0 > [<ffffffffa010ae93>] ? i915_gem_unmap_dma_buf+0x33/0xc0 [i915] > [<ffffffff810df03a>] ? mark_held_locks+0x6a/0x90 > [<ffffffffa010ae93>] i915_gem_unmap_dma_buf+0x33/0xc0 [i915] > [<ffffffff814c3032>] dma_buf_unmap_attachment+0x22/0x40 > [<ffffffffa0034e42>] drm_prime_gem_destroy+0x22/0x40 [drm] > [<ffffffffa0299b5b>] nouveau_gem_object_del+0x3b/0x60 [nouveau]...
2019 Sep 30
0
[PATCH net v2] vsock: Fix a lockdep warning in __vsock_release()
...lock_acquire+0xc4/0x1a0 ? virtio_transport_release+0x34/0x330 [vmw_vsock_virtio_transport_common] lock_sock_nested+0x5d/0x80 ? virtio_transport_release+0x34/0x330 [vmw_vsock_virtio_transport_common] virtio_transport_release+0x34/0x330 [vmw_vsock_virtio_transport_common] ? mark_held_locks+0x49/0x70 ? _raw_spin_unlock_irqrestore+0x44/0x60 __vsock_release+0x2d/0x130 [vsock] __vsock_release+0xb9/0x130 [vsock] vsock_release+0x12/0x30 [vsock] __sock_release+0x3d/0xb0 sock_close+0x14/0x20 __fput+0xc1/0x250 task_work_run+0x93/0xb0 exit_to_userm...
2013 Jan 15
0
nouveau lockdep splat on init
...40.864179] Pid: 524, comm: modprobe Tainted: G W 3.8.0-rc3-patser+ #915 [ 40.864179] Call Trace: [ 40.864179] [<ffffffff8109cd63>] __lock_acquire+0x783/0x1d90 [ 40.864179] [<ffffffff8109c9cf>] ? __lock_acquire+0x3ef/0x1d90 [ 40.864179] [<ffffffff8109b4d2>] ? mark_held_locks+0x82/0x130 [ 40.864179] [<ffffffff8135160e>] ? trace_hardirqs_on_thunk+0x3a/0x3f [ 40.864179] [<ffffffff8109e8e6>] lock_acquire+0x96/0xc0 [ 40.864179] [<ffffffffa0333ba3>] ? nouveau_instobj_create_+0x43/0x90 [nouveau] [ 40.864179] [<ffffffffa02fc3fc>] ? nouveau_...
2012 Aug 24
4
[PATCH] Btrfs: pass lockdep rwsem metadata to async commit transaction
The freeze rwsem is taken by sb_start_intwrite() and dropped during the commit_ or end_transaction(). In the async case, that happens in a worker thread. Tell lockdep the calling thread is releasing ownership of the rwsem and the async thread is picking it up. Josef and I worked out a more complicated solution that made the async commit thread join and potentially get a later transaction, but
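The hand-off that entry describes, where the thread that took freeze protection stops being its owner and the async commit worker releases it later, can be pictured without any lockdep involvement. The sketch below is a userspace stand-in (a POSIX semaphore plus a worker thread), not the btrfs code and not the lockdep annotation itself; the point of the patch is that lockdep must be told about exactly this kind of cross-thread transfer so it does not warn when the rwsem is released by a thread that never formally acquired it.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    /* Stand-in for the freeze protection taken by sb_start_intwrite(). */
    static sem_t freeze_protection;

    static void *async_commit(void *arg)
    {
            (void)arg;
            puts("worker: commit finished, dropping freeze protection");
            sem_post(&freeze_protection);   /* worker releases what the submitter took */
            return NULL;
    }

    int main(void)
    {
            pthread_t worker;

            sem_init(&freeze_protection, 0, 1);
            sem_wait(&freeze_protection);   /* submitter: start of the "internal write" */

            /* Hand the held protection over to the async worker and let it finish. */
            pthread_create(&worker, NULL, async_commit, NULL);
            pthread_join(&worker, NULL);

            sem_destroy(&freeze_protection);
            return 0;
    }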
2018 Aug 05
2
[PATCH net-next 0/6] virtio_net: Add ethtool stat items
...ceive_buf+0x2e30/0x2e30 [virtio_net] [ 46.166796] ? sched_clock_cpu+0x18/0x2b0 [ 46.166809] ? print_irqtrace_events+0x280/0x280 [ 46.166817] ? print_irqtrace_events+0x280/0x280 [ 46.166830] ? rcu_process_callbacks+0xc5e/0x12d0 [ 46.166838] ? kvm_clock_read+0x1f/0x30 [ 46.166857] ? mark_held_locks+0xd5/0x170 [ 46.166867] ? net_rx_action+0x2aa/0x10e0 [ 46.166882] net_rx_action+0x4bc/0x10e0 [ 46.166906] ? napi_complete_done+0x480/0x480 [ 46.166925] ? print_irqtrace_events+0x280/0x280 [ 46.166935] ? sched_clock+0x5/0x10 [ 46.166952] ? __lock_is_held+0xcb/0x1a0 [ 46.166982]...
2018 May 02
0
[PATCH] drm/nouveau: Fix deadlock in nv50_mstm_register_connector()
...ched+0x15/0x30 ? ww_mutex_lock+0x43/0x80 ? drm_modeset_lock+0xb2/0x130 [drm] ? drm_fb_helper_add_one_connector+0x2a/0x60 [drm_kms_helper] drm_fb_helper_add_one_connector+0x2a/0x60 [drm_kms_helper] nv50_mstm_register_connector+0x2c/0x50 [nouveau] drm_dp_add_port+0x2f5/0x420 [drm_kms_helper] ? mark_held_locks+0x50/0x80 ? kfree+0xcf/0x2a0 ? drm_dp_check_mstb_guid+0xd6/0x120 [drm_kms_helper] ? trace_hardirqs_on_caller+0xed/0x180 ? drm_dp_check_mstb_guid+0xd6/0x120 [drm_kms_helper] drm_dp_send_link_address+0x155/0x1e0 [drm_kms_helper] drm_dp_add_port+0x33f/0x420 [drm_kms_helper] ? nouveau_connector...
2013 Jul 01
1
[PATCH] drm/nouveau: fix locking in nouveau_crtc_page_flip
...dump_stack+0x19/0x1b [<ffffffff816e5f4f>] print_circular_bug+0x1fb/0x20c [<ffffffff810b9729>] __lock_acquire+0x1c29/0x1c2b [<ffffffff810b9dbd>] lock_acquire+0x90/0x1f9 [<ffffffffa0346b66>] ? nouveau_bo_move_m2mf.isra.13+0x4d/0x130 [nouveau] [<ffffffff810ba731>] ? mark_held_locks+0x6d/0x117 [<ffffffff816ed517>] mutex_lock_nested+0x56/0x3bb [<ffffffffa0346b66>] ? nouveau_bo_move_m2mf.isra.13+0x4d/0x130 [nouveau] [<ffffffff810ba99e>] ? trace_hardirqs_on+0xd/0xf [<ffffffffa0346b66>] nouveau_bo_move_m2mf.isra.13+0x4d/0x130 [nouveau] [<ffffffffa0...
2018 Aug 06
1
[PATCH v4 7/8] drm/nouveau: Fix deadlocks in nouveau_connector_detect()
...00080 > [ 861.499045] Workqueue: pm pm_runtime_work > [ 861.499739] Call Trace: > [ 861.500428] __schedule+0x322/0xaf0 > [ 861.501134] ? wait_for_completion+0x104/0x190 > [ 861.501851] schedule+0x33/0x90 > [ 861.502564] schedule_timeout+0x3a5/0x590 > [ 861.503284] ? mark_held_locks+0x58/0x80 > [ 861.503988] ? _raw_spin_unlock_irq+0x2c/0x40 > [ 861.504710] ? wait_for_completion+0x104/0x190 > [ 861.505417] ? trace_hardirqs_on_caller+0xf4/0x190 > [ 861.506136] ? wait_for_completion+0x104/0x190 > [ 861.506845] wait_for_completion+0x12c/0x190 > [ 861....
2018 Jul 16
0
[PATCH 2/5] drm/nouveau: Grab RPM ref while probing outputs
...r/0:1 D 0 60 2 0x80000000 [ 246.703293] Workqueue: pm pm_runtime_work [ 246.704393] Call Trace: [ 246.705403] __schedule+0x322/0xaf0 [ 246.706439] ? wait_for_completion+0x104/0x190 [ 246.707393] schedule+0x33/0x90 [ 246.708375] schedule_timeout+0x3a5/0x590 [ 246.709289] ? mark_held_locks+0x58/0x80 [ 246.710208] ? _raw_spin_unlock_irq+0x2c/0x40 [ 246.711222] ? wait_for_completion+0x104/0x190 [ 246.712134] ? trace_hardirqs_on_caller+0xf4/0x190 [ 246.713094] ? wait_for_completion+0x104/0x190 [ 246.713964] wait_for_completion+0x12c/0x190 [ 246.714895] ? wake_up_q+0x80/0x80...
2018 Aug 05
0
[PATCH net-next 0/6] virtio_net: Add ethtool stat items
...o_net] > [ 46.166796] ? sched_clock_cpu+0x18/0x2b0 > [ 46.166809] ? print_irqtrace_events+0x280/0x280 > [ 46.166817] ? print_irqtrace_events+0x280/0x280 > [ 46.166830] ? rcu_process_callbacks+0xc5e/0x12d0 > [ 46.166838] ? kvm_clock_read+0x1f/0x30 > [ 46.166857] ? mark_held_locks+0xd5/0x170 > [ 46.166867] ? net_rx_action+0x2aa/0x10e0 > [ 46.166882] net_rx_action+0x4bc/0x10e0 > [ 46.166906] ? napi_complete_done+0x480/0x480 > [ 46.166925] ? print_irqtrace_events+0x280/0x280 > [ 46.166935] ? sched_clock+0x5/0x10 > [ 46.166952] ? __lock_is_h...