Displaying 20 results from an estimated 115 matches for "_raw_spin_unlock_irqrestore".
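Every hit below is a kernel log excerpt whose stack trace passes through _raw_spin_unlock_irqrestore, the out-of-line helper behind the spin_unlock_irqrestore() API; frames prefixed with "?" are addresses the unwinder found on the stack but could not confirm as part of the live call chain. For context, here is a minimal sketch of the locking pattern that produces this symbol (the lock and counter names are illustrative, not taken from any thread below):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);      /* illustrative name */
    static unsigned long demo_events;

    /* Safe from both process and interrupt context: saves the current
     * IRQ state, disables local interrupts, and takes the lock. */
    static void demo_count_event(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&demo_lock, flags);
            demo_events++;
            /* Restores the saved IRQ state; on typical SMP builds this
             * lands in the out-of-line _raw_spin_unlock_irqrestore(),
             * the symbol matched by every result below. */
            spin_unlock_irqrestore(&demo_lock, flags);
    }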
2018 Jan 31 | 2 | swiotlb buffer is full
...alidate_caches+0x10/0x10
[ +0.000009] ? nouveau_gem_new+0x100/0x100
[ +0.000004] nouveau_gem_new+0x49/0x100
[ +0.000009] nouveau_gem_ioctl_new+0x41/0xc0
[ +0.000009] drm_ioctl_kernel+0x59/0xb0
[ +0.000008] drm_ioctl+0x2c1/0x350
[ +0.000007] ? nouveau_gem_new+0x100/0x100
[ +0.000012] ? _raw_spin_unlock_irqrestore+0x4d/0x90
[ +0.000006] ? preempt_count_sub+0x9b/0xd0
[ +0.000005] ? _raw_spin_unlock_irqrestore+0x6b/0x90
[ +0.000008] nouveau_drm_ioctl+0x64/0xc0
[ +0.000009] do_vfs_ioctl+0x8e/0x690
[ +0.000007] ? __fget+0x116/0x200
[ +0.000010] SyS_ioctl+0x74/0x80
[ +0.000009] entry_SYSCALL_64_fast...
2014 Oct 13 | 2 | kernel crashes after soft lockups in xen domU
...evtchn_callback+0x9/0xa
[354008.101006] [<ffffffff81006d22>] ? check_events+0x12/0x20
[354008.101011] [<ffffffff81006d0f>] ? xen_restore_fl_direct_reloc+0x4/0x4
[354008.101017] [<ffffffff81071153>] ? arch_local_irq_restore+0x7/0x8
[354008.101024] [<ffffffff8135049f>] ? _raw_spin_unlock_irqrestore+0xe/0xf
[354008.101031] [<ffffffff810be895>] ? release_pages+0xf4/0x14d
[354008.101038] [<ffffffff810de78b>] ? free_pages_and_swap_cache+0x48/0x60
[354008.101045] [<ffffffff810cf527>] ? tlb_flush_mmu+0x37/0x50
[354008.101049] [<ffffffff810cf54c>] ? tlb_finish_mmu+0xc/0x...
2016 Jun 16 | 2 | [PATCH v7 00/12] Support non-lru page migration
...f8111f405>] ? debug_show_all_locks+0x226/0x226
> kernel: [<ffffffff811f0d6b>] ? warn_alloc_failed+0x24c/0x24c
> kernel: [<ffffffff81110ffc>] ? finish_wait+0x1a4/0x1b0
> kernel: [<ffffffff81122faf>] ? lock_acquire+0xec/0x147
> kernel: [<ffffffff81d32ed0>] ? _raw_spin_unlock_irqrestore+0x3b/0x5c
> kernel: [<ffffffff81d32edc>] ? _raw_spin_unlock_irqrestore+0x47/0x5c
> kernel: [<ffffffff81110ffc>] ? finish_wait+0x1a4/0x1b0
> kernel: [<ffffffff8128f73a>] khugepaged+0x1d4/0x484f
> kernel: [<ffffffff8128f566>] ? hugepage_vma_revalidate+0xef/0xef...
2014 Oct 13 | 2 | v3.17, i915 vs nouveau: possible recursive locking detected
...ocked+0xe4/0x120 [drm]
[<ffffffffa001ce2a>] drm_gem_handle_delete+0xba/0x110 [drm]
[<ffffffffa001d495>] drm_gem_close_ioctl+0x25/0x30 [drm]
[<ffffffffa001df0c>] drm_ioctl+0x1ec/0x660 [drm]
[<ffffffff8148e4b2>] ? __pm_runtime_resume+0x32/0x60
[<ffffffff817102fd>] ? _raw_spin_unlock_irqrestore+0x5d/0x70
[<ffffffff810df15d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[<ffffffff810df22d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff817102e2>] ? _raw_spin_unlock_irqrestore+0x42/0x70
[<ffffffffa0290bd4>] nouveau_drm_ioctl+0x54/0xc0 [nouveau]
[<ffffffff812072a0>] do_v...
2018 Feb 01 | 1 | swiotlb buffer is full
...new+0x100/0x100
>> [ +0.000004] nouveau_gem_new+0x49/0x100
>> [ +0.000009] nouveau_gem_ioctl_new+0x41/0xc0
>> [ +0.000009] drm_ioctl_kernel+0x59/0xb0
>> [ +0.000008] drm_ioctl+0x2c1/0x350
>> [ +0.000007] ? nouveau_gem_new+0x100/0x100
>> [ +0.000012] ? _raw_spin_unlock_irqrestore+0x4d/0x90
>> [ +0.000006] ? preempt_count_sub+0x9b/0xd0
>> [ +0.000005] ? _raw_spin_unlock_irqrestore+0x6b/0x90
>> [ +0.000008] nouveau_drm_ioctl+0x64/0xc0
>> [ +0.000009] do_vfs_ioctl+0x8e/0x690
>> [ +0.000007] ? __fget+0x116/0x200
>> [ +0.000010] Sy...
2014 Nov 05 | 0 | kernel crashes after soft lockups in xen domU
...354008.101006] [<ffffffff81006d22>] ? check_events+0x12/0x20
> [354008.101011] [<ffffffff81006d0f>] ? xen_restore_fl_direct_reloc+0x4/0x4
> [354008.101017] [<ffffffff81071153>] ? arch_local_irq_restore+0x7/0x8
> [354008.101024] [<ffffffff8135049f>] ? _raw_spin_unlock_irqrestore+0xe/0xf
> [354008.101031] [<ffffffff810be895>] ? release_pages+0xf4/0x14d
> [354008.101038] [<ffffffff810de78b>] ? free_pages_and_swap_cache+0x48/0x60
> [354008.101045] [<ffffffff810cf527>] ? tlb_flush_mmu+0x37/0x50
> [354008.101049] [<ffffffff810cf54c>...
2012 Oct 22 | 4 | xen_evtchn_do_upcall
...0.058 us | gnttab_release_grant_reference();
1) | dev_kfree_skb_irq() {
1) 0.050 us | raise_softirq_irqoff();
1) 0.456 us | }
1) 3.714 us | }
1) 0.102 us | _raw_spin_unlock_irqrestore();
1) 4.857 us | }
1) 0.061 us | note_interrupt();
1) 5.571 us | }
1) 0.054 us | _raw_spin_lock();
1) 6.707 us | }
1) 0.083 us | _raw_spin_unlock();
1) + 10.083 us | }
1) + 10.985 us |...
2016 Apr 19 | 0 | Bug#820862: AW: Bug#820862: Acknowledgement (xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds)
...0/0x30
[ 1920.052257] [<ffffffffa0054b49>] ? jbd2_journal_commit_transaction+0xe79/0x1950 [jbd2]
[ 1920.052320] [<ffffffff8100331e>] ? xen_end_context_switch+0xe/0x20
[ 1920.052327] [<ffffffff810912f6>] ? finish_task_switch+0x46/0xf0
[ 1920.052331] [<ffffffff815141a3>] ? _raw_spin_unlock_irqrestore+0x13/0x20
[ 1920.052338] [<ffffffffa0058be2>] ? kjournald2+0xb2/0x240 [jbd2]
[ 1920.052341] [<ffffffff810a7e60>] ? prepare_to_wait_event+0xf0/0xf0
[ 1920.052347] [<ffffffffa0058b30>] ? commit_timeout+0x10/0x10 [jbd2]
[ 1920.052353] [<ffffffff8108809d>] ? kthread+0xbd/0xe...
2016 Jun 15 | 0 | [PATCH v7 00/12] Support non-lru page migration
...ff880017ad9aa8 736761742d6f6e2c 1ffff1002248de34 ffff880017ad9a90
[ 315.147113] 0000069a1246f660 000000000000069a ffff880005114000 ffffea0002ff0180
[ 315.147143] Call Trace:
[ 315.147154] [<ffffffffa02c3de8>] ? obj_to_head+0x9d/0x9d [zsmalloc]
[ 315.147175] [<ffffffff81d31dbc>] ? _raw_spin_unlock_irqrestore+0x47/0x5c
[ 315.147195] [<ffffffff812275b1>] ? isolate_freepages_block+0x2f9/0x5a6
[ 315.147213] [<ffffffff8127f15c>] ? kasan_poison_shadow+0x2f/0x31
[ 315.147230] [<ffffffff8127f66a>] ? kasan_alloc_pages+0x39/0x3b
[ 315.147246] [<ffffffff812267e6>] ? map_pages+0x1f3...
2016 Jun 16 | 2 | [PATCH v7 00/12] Support non-lru page migration
On Thu, Jun 16, 2016 at 11:48:27AM +0900, Sergey Senozhatsky wrote:
> Hi,
>
> On (06/16/16 08:12), Minchan Kim wrote:
> > > [ 315.146533] kasan: CONFIG_KASAN_INLINE enabled
> > > [ 315.146538] kasan: GPF could be caused by NULL-ptr deref or user memory access
> > > [ 315.146546] general protection fault: 0000 [#1] PREEMPT SMP KASAN
> > > [
2019 Dec 23 | 5 | [PATCH net] virtio_net: CTRL_GUEST_OFFLOADS depends on CTRL_VQ
00fffe0ff0 DR7: 0000000000000400
> > Call Trace:
> > ? preempt_count_add+0x58/0xb0
> > ? _raw_spin_lock_irqsave+0x36/0x70
> > ? _raw_spin_unlock_irqrestore+0x1a/0x40
> > ? __wake_up+0x70/0x190
> > virtnet_set_features+0x90/0xf0 [virtio_net]
> > __netdev_update_features+0x271/0x980
> > ? nlmsg_notify+0x5b/0xa0
> > dev_disable_lro+0x2b/0x190
> > ? inet_netconf_notify_devconf+0xe2/0x120
> > devinet_sysctl_...
2017 Jul 19 | 1 | kernel-4.9.37-29.el7 (and el6)
On Mon, 17 Jul 2017, Johnny Hughes wrote:
> Are the testing kernels (kernel-4.9.37-29.el7 and kernel-4.9.37-29.el6,
> with the one config file change) working for everyone:
>
> (turn off: CONFIG_IO_STRICT_DEVMEM)
Hello.
Maybe it's not the most appropriate thread or time, but I have been
signalling it before:
4.9.* kernels do not work well for me any more (and for other people
2018 Feb 01 | 0 | swiotlb buffer is full
....000009] ? nouveau_gem_new+0x100/0x100
> [ +0.000004] nouveau_gem_new+0x49/0x100
> [ +0.000009] nouveau_gem_ioctl_new+0x41/0xc0
> [ +0.000009] drm_ioctl_kernel+0x59/0xb0
> [ +0.000008] drm_ioctl+0x2c1/0x350
> [ +0.000007] ? nouveau_gem_new+0x100/0x100
> [ +0.000012] ? _raw_spin_unlock_irqrestore+0x4d/0x90
> [ +0.000006] ? preempt_count_sub+0x9b/0xd0
> [ +0.000005] ? _raw_spin_unlock_irqrestore+0x6b/0x90
> [ +0.000008] nouveau_drm_ioctl+0x64/0xc0
> [ +0.000009] do_vfs_ioctl+0x8e/0x690
> [ +0.000007] ? __fget+0x116/0x200
> [ +0.000010] SyS_ioctl+0x74/0x80
> [...
2020 Sep 09 | 0 | nouveau: BUG: Invalid wait context
.../0x1cb0 [nouveau]
[ 1143.134136] ? nouveau_gem_ioctl_new+0xc0/0xc0 [nouveau]
[ 1143.134159] ? drm_ioctl_kernel+0x91/0xe0 [drm]
[ 1143.134170] drm_ioctl_kernel+0x91/0xe0 [drm]
[ 1143.134182] drm_ioctl+0x2db/0x380 [drm]
[ 1143.134211] ? nouveau_gem_ioctl_new+0xc0/0xc0 [nouveau]
[ 1143.134217] ? _raw_spin_unlock_irqrestore+0x47/0x60
[ 1143.134222] ? lockdep_hardirqs_on+0x78/0x100
[ 1143.134226] ? _raw_spin_unlock_irqrestore+0x34/0x60
[ 1143.134257] nouveau_drm_ioctl+0x56/0xb0 [nouveau]
[ 1143.134263] __x64_sys_ioctl+0x8e/0xd0
[ 1143.134267] ? lockdep_hardirqs_on+0x78/0x100
[ 1143.134271] do_syscall_64+0x33/0x40...
2016 Jun 16 | 0 | [PATCH v7 00/12] Support non-lru page migration
...ug_show_all_locks+0x226/0x226
> > kernel: [<ffffffff811f0d6b>] ? warn_alloc_failed+0x24c/0x24c
> > kernel: [<ffffffff81110ffc>] ? finish_wait+0x1a4/0x1b0
> > kernel: [<ffffffff81122faf>] ? lock_acquire+0xec/0x147
> > kernel: [<ffffffff81d32ed0>] ? _raw_spin_unlock_irqrestore+0x3b/0x5c
> > kernel: [<ffffffff81d32edc>] ? _raw_spin_unlock_irqrestore+0x47/0x5c
> > kernel: [<ffffffff81110ffc>] ? finish_wait+0x1a4/0x1b0
> > kernel: [<ffffffff8128f73a>] khugepaged+0x1d4/0x484f
> > kernel: [<ffffffff8128f566>] ? hugepage_vma_...
2014 Jun 08 | 0 | lockdep splat while exiting PRIME
...28/0x130 [drm]
[<ffffffffa00056aa>] drm_gem_handle_delete+0xba/0x110 [drm]
[<ffffffffa0005dc5>] drm_gem_close_ioctl+0x25/0x30 [drm]
[<ffffffffa0003a80>] drm_ioctl+0x1e0/0x5f0 [drm]
[<ffffffffa0005da0>] ? drm_gem_handle_create+0x40/0x40 [drm]
[<ffffffff815f8bbd>] ? _raw_spin_unlock_irqrestore+0x5d/0x80
[<ffffffff8109f6bd>] ? trace_hardirqs_on_caller+0x15d/0x200
[<ffffffff8109f76d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff815f8ba2>] ? _raw_spin_unlock_irqrestore+0x42/0x80
[<ffffffffa07a2175>] nouveau_drm_ioctl+0x65/0xa0 [nouveau]
[<ffffffff811a7fb0>] do_...
2014 Oct 16 | 0 | [Intel-gfx] v3.17, i915 vs nouveau: possible recursive locking detected
...> [<ffffffffa001ce2a>] drm_gem_handle_delete+0xba/0x110 [drm]
> [<ffffffffa001d495>] drm_gem_close_ioctl+0x25/0x30 [drm]
> [<ffffffffa001df0c>] drm_ioctl+0x1ec/0x660 [drm]
> [<ffffffff8148e4b2>] ? __pm_runtime_resume+0x32/0x60
> [<ffffffff817102fd>] ? _raw_spin_unlock_irqrestore+0x5d/0x70
> [<ffffffff810df15d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
> [<ffffffff810df22d>] ? trace_hardirqs_on+0xd/0x10
> [<ffffffff817102e2>] ? _raw_spin_unlock_irqrestore+0x42/0x70
> [<ffffffffa0290bd4>] nouveau_drm_ioctl+0x54/0xc0 [nouveau]
> [<f...