Displaying 20 results from an estimated 42 matches for "0x670".
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...multaneously hit a
number of splats in the block layer:
* inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
jbd2_trans_will_send_data_barrier
* BUG: sleeping function called from invalid context at mm/mempool.c:320
* WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
... I've included the full splats at the end of the mail.
These all happen in the context of the virtio block IRQ handler, so I
wonder if this calls something that doesn't expect to be called from IRQ
context. Is it valid to call blk_mq_complete_request() or
blk_mq_end_request() fro...
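The completion path being asked about is the virtio-blk virtqueue callback, which runs in hard-IRQ context and hands finished requests to blk-mq. Below is a trimmed, illustrative sketch of that v4.16-era callback in drivers/block/virtio_blk.c (callback re-enabling, error handling and queue restarting omitted), not the driver verbatim:

static void virtblk_done(struct virtqueue *vq)          /* hard-IRQ context */
{
        struct virtio_blk *vblk = vq->vdev->priv;
        struct virtblk_req *vbr;
        unsigned long flags;
        unsigned int len;

        spin_lock_irqsave(&vblk->vqs[vq->index].lock, flags);
        /* Pull each finished descriptor off the virtqueue... */
        while ((vbr = virtqueue_get_buf(vq, &len)) != NULL) {
                /* ...and complete it.  Whatever blk-mq and the upper
                 * layers run from here inherits this atomic context. */
                blk_mq_complete_request(blk_mq_rq_from_pdu(vbr));
        }
        spin_unlock_irqrestore(&vblk->vqs[vq->index].lock, flags);
}

So the question is whether anything reached from blk_mq_complete_request() (or blk_mq_end_request()) assumes it may sleep, which is what the mempool and lockdep splats suggest is happening.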
2019 Sep 05
2
Xorg indefinitely hangs in kernelspace
On 05.09.19 10:14, Gerd Hoffmann wrote:
> On Tue, Aug 06, 2019 at 09:00:10PM +0300, Jaak Ristioja wrote:
>> Hello!
>>
>> I'm writing to report a crash in the QXL / DRM code in the Linux kernel.
>> I originally filed the issue on LaunchPad and more details can be found
>> there, although I doubt whether these details are useful.
>
> Any change with kernel
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...in the block layer:
>
> * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
> jbd2_trans_will_send_data_barrier
>
> * BUG: sleeping function called from invalid context at mm/mempool.c:320
>
> * WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
>
> ... I've included the full splats at the end of the mail.
>
> These all happen in the context of the virtio block IRQ handler, so I
> wonder if this calls something that doesn't expect to be called from IRQ
> context. Is it valid to call blk_mq_complete_request...
2019 Sep 24
0
Xorg indefinitely hangs in kernelspace
...2_ioctl+0xe/0x10 [drm]
[124212.551452] drm_ioctl_kernel+0xae/0xf0 [drm]
[124212.551458] drm_ioctl+0x234/0x3d0 [drm]
[124212.551464] ? drm_mode_cursor_ioctl+0x60/0x60 [drm]
[124212.551466] ? timerqueue_add+0x5f/0xa0
[124212.551469] ? enqueue_hrtimer+0x3d/0x90
[124212.551471] do_vfs_ioctl+0x407/0x670
[124212.551473] ? fput+0x13/0x20
[124212.551475] ? __sys_recvmsg+0x88/0xa0
[124212.551476] ksys_ioctl+0x67/0x90
[124212.551477] __x64_sys_ioctl+0x1a/0x20
[124212.551479] do_syscall_64+0x5a/0x130
[124212.551480] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[124212.551481] RIP: 0033:0x7f07c79ee417...
2016 May 25
3
[PATCH] x86/paravirt: Do not trace _paravirt_ident_*() functions
...Call Trace:
[<ffffffff81cc5f47>] _raw_spin_lock+0x27/0x30
[<ffffffff8122c15b>] handle_pte_fault+0x13db/0x16b0
[<ffffffff811bf4cb>] ? function_trace_call+0x15b/0x180
[<ffffffff8122ad85>] ? handle_pte_fault+0x5/0x16b0
[<ffffffff8122e322>] handle_mm_fault+0x312/0x670
[<ffffffff81231068>] ? find_vma+0x68/0x70
[<ffffffff810ab741>] __do_page_fault+0x1b1/0x4e0
[<ffffffff810aba92>] do_page_fault+0x22/0x30
[<ffffffff81cc7f68>] page_fault+0x28/0x30
[<ffffffff81574af5>] ? copy_user_enhanced_fast_string+0x5/0x10
[<ffffffff812...
2012 Sep 12
2
Deadlock in btrfs-cleaner, related to snapshot deletion
...ffffffa00cc9f8>] btrfs_search_slot+0x368/0x740 [btrfs]
[ 386.318502] [<ffffffffa00d32de>] lookup_inline_extent_backref+0x8e/0x4c0 [btrfs]
[ 386.318532] [<ffffffffa00d3770>] lookup_extent_backref+0x60/0xf0 [btrfs]
[ 386.318561] [<ffffffffa00d5c55>] __btrfs_free_extent+0xb5/0x670 [btrfs]
[ 386.318592] [<ffffffffa00d6324>] run_delayed_tree_ref+0x114/0x190 [btrfs]
[ 386.318623] [<ffffffffa00dac2e>] run_one_delayed_ref+0xde/0xf0 [btrfs]
[ 386.318654] [<ffffffffa00dad76>] run_clustered_refs+0x136/0x3d0 [btrfs]
[ 386.318685] [<ffffffffa00db100>] b...
2017 Dec 02
0
nouveau: refcount_t splat on 4.15-rc1 on nv50
...able_device_flags+0x155/0x200
[ 10.049806] drm_get_pci_dev+0xde/0x2c0
[ 10.053874] nouveau_drm_probe+0x1b9/0x240 [nouveau]
[ 10.058986] ? __pm_runtime_resume+0x68/0xb0
[ 10.063409] local_pci_probe+0x5e/0xf0
[ 10.067300] work_for_cpu_fn+0x10/0x30
[ 10.071183] process_one_work+0x21a/0x670
[ 10.075325] worker_thread+0x256/0x500
[ 10.079208] ? manage_workers+0x1e0/0x1e0
[ 10.083362] kthread+0x169/0x220
[ 10.086730] ? kthread_create_worker_on_cpu+0x40/0x40
[ 10.091933] ret_from_fork+0x1f/0x30
[ 10.095655] Code: ff 84 c0 74 02 5b c3 0f b6 1d 59 b2 a6 01 80 fb 01 77 1c 8...
2018 Feb 26
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...> > * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
> > jbd2_trans_will_send_data_barrier
> >
> > * BUG: sleeping function called from invalid context at mm/mempool.c:320
> >
> > * WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
> >
> > ... I've included the full splats at the end of the mail.
> >
> > These all happen in the context of the virtio block IRQ handler, so I
> > wonder if this calls something that doesn't expect to be called from IRQ
> > context. Is it valid t...
2018 Mar 05
0
[PATCH v2 0/7] Modernize vga_switcheroo by using device link for HDA
...srcu_invoke_callbacks+0xa2/0x150
acpi_pci_irq_lookup+0x27/0x2e0
acpi_pci_irq_disable+0x45/0xb0
pci_release_dev+0x29/0x60
device_release+0x2d/0x80
kobject_put+0xb7/0x190
__device_link_free_srcu+0x32/0x40
srcu_invoke_callbacks+0xba/0x150
process_one_work+0x273/0x670
worker_thread+0x4a/0x400
kthread+0x100/0x140
? process_one_work+0x670/0x670
? kthread_create_worker_on_cpu+0x50/0x50
? do_syscall_64+0x56/0x1a0
? SyS_exit_group+0x10/0x10
Issue 9 - potential memory corruption.
At some point (possibly after issue 7, but I am not fully...
2019 Sep 30
2
[Spice-devel] Xorg indefinitely hangs in kernelspace
...24212.551452] drm_ioctl_kernel+0xae/0xf0 [drm]
> [124212.551458] drm_ioctl+0x234/0x3d0 [drm]
> [124212.551464] ? drm_mode_cursor_ioctl+0x60/0x60 [drm]
> [124212.551466] ? timerqueue_add+0x5f/0xa0
> [124212.551469] ? enqueue_hrtimer+0x3d/0x90
> [124212.551471] do_vfs_ioctl+0x407/0x670
> [124212.551473] ? fput+0x13/0x20
> [124212.551475] ? __sys_recvmsg+0x88/0xa0
> [124212.551476] ksys_ioctl+0x67/0x90
> [124212.551477] __x64_sys_ioctl+0x1a/0x20
> [124212.551479] do_syscall_64+0x5a/0x130
> [124212.551480] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [12421...
2017 Apr 11
0
[Bug 99900] [NVC1] nouveau: freeze / crash after kernel update to 4.10
..._resume+0xa0/0xa0
[ 49.196384] ? intel_runtime_suspend+0x142/0x250 [i915]
[ 49.196386] ? pci_pm_runtime_suspend+0x50/0x140
[ 49.196387] ? __rpm_callback+0xb1/0x1f0
[ 49.196389] ? rpm_callback+0x1a/0x70
[ 49.196390] ? pci_pm_runtime_resume+0xa0/0xa0
[ 49.196392] ? rpm_suspend+0x11d/0x670
[ 49.196396] ? _raw_write_unlock_irq+0xe/0x20
[ 49.196400] ? finish_task_switch+0xa7/0x260
[ 49.196403] ? __update_idle_core+0x1b/0xb0
[ 49.196405] ? pm_runtime_work+0x62/0xa0
[ 49.196407] ? process_one_work+0x133/0x480
[ 49.196408] ? worker_thread+0x42/0x4c0
[ 49.196411] ? kth...
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...RDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
> > > jbd2_trans_will_send_data_barrier
> > >
> > > * BUG: sleeping function called from invalid context at mm/mempool.c:320
> > >
> > > * WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750
> > >
> > > ... I've included the full splats at the end of the mail.
> > >
> > > These all happen in the context of the virtio block IRQ handler, so I
> > > wonder if this calls something that doesn't expect to be called from IRQ
> >...
2012 Aug 24
4
[PATCH] Btrfs: pass lockdep rwsem metadata to async commit transaction
The freeze rwsem is taken by sb_start_intwrite() and dropped during the
commit_ or end_transaction(). In the async case, that happens in a worker
thread. Tell lockdep the calling thread is releasing ownership of the
rwsem and the async thread is picking it up.
Josef and I worked out a more complicated solution that made the async
commit thread join and potentially get a later transaction, but
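The hand-off this patch describes maps onto lockdep's rwsem annotation macros. A minimal sketch of the pattern follows, assuming a lockdep-enabled build and the current rwsem_release()/rwsem_acquire_read() argument lists from <linux/lockdep.h> (these have varied slightly across kernel versions); the helper names and the plain rw_semaphore are illustrative, not the patch itself:

#include <linux/lockdep.h>
#include <linux/rwsem.h>

/* Queuing thread: the freeze rwsem was taken read-side (e.g. via
 * sb_start_intwrite()) and the async commit worker will be the one to
 * drop it, so tell lockdep this thread no longer owns it. */
static void freeze_sem_hand_off(struct rw_semaphore *sem)
{
        rwsem_release(&sem->dep_map, _THIS_IP_);
}

/* Worker thread: before running the commit that will eventually
 * release the rwsem for real, tell lockdep ownership now lives here.
 * trylock=1 marks this as a non-blocking acquire, the usual choice
 * when annotating a lock that is already held. */
static void freeze_sem_pick_up(struct rw_semaphore *sem)
{
        rwsem_acquire_read(&sem->dep_map, 0, 1, _THIS_IP_);
}

Without the second annotation, lockdep would see the worker dropping a read lock it never acquired; without the first, it would see the queuing thread returning with the freeze protection apparently still held.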
2013 May 07
2
Kernel BUG: __tree_mod_log_rewind
...8800338d3000 0000000000000001
May 7 02:09:21 caper kernel: [ 726.752038] Call Trace:
May 7 02:09:21 caper kernel: [ 726.752135] [<ffffffffa00ea5ef>]
tree_mod_log_rewind+0xdf/0x240 [btrfs]
May 7 02:09:21 caper kernel: [ 726.752237] [<ffffffffa00f25cb>]
btrfs_search_old_slot+0x4cb/0x670 [btrfs]
May 7 02:09:21 caper kernel: [ 726.752351] [<ffffffffa016d118>]
__resolve_indirect_ref+0xc8/0x150 [btrfs]
May 7 02:09:21 caper kernel: [ 726.752462] [<ffffffffa016d23e>]
__resolve_indirect_refs+0x9e/0x200 [btrfs]
May 7 02:09:21 caper kernel: [ 726.752573] [<ffffffffa...
2018 Jan 06
2
Centos 7 Kernel 3.10.0-693.11.6.el7.x86_64 does not boot PV
On 01/06/2018 03:16 AM, Dmitry Melekhov wrote:
> The same problem with latest centos 6 kernel,i.e. with meltdown fix.
>
> I can't see console output, because I have it on "cloud" provider
> hosting :-)
>
>
>
> 06.01.2018 05:13, Shaun Reitan wrote:
>> Broken!
>>
>>
For those of you looking for a PV enabled client Kernel for CentOS Linux
2018 Jan 09
1
Centos 7 Kernel 3.10.0-693.11.6.el7.x86_64 does not boot PV
...<ffffffff812906c3>] __blk_run_queue+0x33/0x40
> [  587.145018]  [<ffffffff812906f9>] blk_start_queue+0x29/0x40
> [  587.145018]  [<ffffffffa0000eb1>]
> kick_pending_request_queues+0x21/0x30 [xen_blkfront]
> [  587.145018]  [<ffffffffa0001487>] blkif_interrupt+0x5c7/0x670
> [xen_blkfront]
> [  587.145018]  [<ffffffff810f735e>] handle_irq_event_percpu+0x3e/0x1e0
> [  587.145018]  [<ffffffff810f753d>] handle_irq_event+0x3d/0x60
> [  587.145018]  [<ffffffff810fa1c7>] handle_edge_irq+0x77/0x130
> [  587.145018]  [<ffffffff81363527>...