Displaying 20 results from an estimated 58 matches for "0x650".
2018 May 02 · 0 replies · [PATCH] drm/nouveau: Fix deadlock in nv50_mstm_register_connector()
...155/0x1e0 [drm_kms_helper]
drm_dp_add_port+0x33f/0x420 [drm_kms_helper]
drm_dp_send_link_address+0x155/0x1e0 [drm_kms_helper]
drm_dp_check_and_send_link_address+0x87/0xd0 [drm_kms_helper]
drm_dp_mst_link_probe_work+0x4d/0x80 [drm_kms_helper]
process_one_work+0x20d/0x650
worker_thread+0x3a/0x390
kthread+0x11e/0x140
ret_from_fork+0x3a/0x50
other info that might help us debug this:
Chain exists of:
&helper->lock --> crtc_ww_class_acquire --> crtc_ww_class_mutex
Possible unsafe locking scenario:
CPU0 CPU1...
2019 Aug 06 · 2 replies · Xorg indefinitely hangs in kernelspace
...[354073.717755] Not tainted 5.2.0-050200rc1-generic #201905191930
[354073.722277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[354073.738332] Xorg D 0 920 854 0x00404004
[354073.738334] Call Trace:
[354073.738340] __schedule+0x2ba/0x650
[354073.738342] schedule+0x2d/0x90
[354073.738343] schedule_preempt_disabled+0xe/0x10
[354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
[354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
[354073.738347] ww_mutex_lock+0x34/0x50
[354073.738352] ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
[354073...
2014 Nov 25 · 3 replies · Second copy engine on GF116
...to just ignore it. You can distinguish this
>> > decompress engine from normal copy engine by looking at the CE capability
>> > register on falcon (0x00000650). If bit 2 is '1', then the falcon is
>> > a decompress engine.
>>
>> I presume you mean a +0x650 register on the pcopy engines (0x104000
>> and 0x105000). I only have access to the GF108 right now, which
>> returns 3 for 0x104650 and 4 for 0x105650. We're using the engine at
>> 0x104000 for copy on the GF108...
>
> Yes, 0x104650 and 0x105650 are the right addresses,...
2010 Nov 04 · 4 replies · Bug#602378: xen-hypervisor-4.0-amd64: Live migration of Guests crashes and reboots
...ff82c48037a90
(XEN) f000000000000000 ffff82f604222dc0 0000000000000001 00000000fffffff
(XEN) Xen call trace:
(XEN) [<ffff82c4801151f6>] free_heap_pages+0x366/0x4b0
(XEN) [<ffff82c480115488>] free_domheap_pages+0x148/0x380
(XEN) [<ffff82c48015f618>] free_page_type+0x388/0x650
(XEN) [<ffff82c48015fa1c>] __put_page_type+0x13c/0x2d0
(XEN) [<ffff82c48015d179>] is_iomem_page+0x9/0x90
(XEN) [<ffff82c48015f277>] put_page_from_l2e+0xf7/0x110
(XEN) [<ffff82c48015f6ee>] free_page_type+0x45e/0x650
(XEN) [<ffff82c48014605a>] event_check_...
2014 Nov 25 · 0 replies · Second copy engine on GF116
...an distinguish this
> >> > decompress engine from normal copy engine by looking at the CE capability
> >> > register on falcon (0x00000650). If bit 2 is '1', then the falcon is
> >> > a decompress engine.
> >>
> >> I presume you mean a +0x650 register on the pcopy engines (0x104000
> >> and 0x105000). I only have access to the GF108 right now, which
> >> returns 3 for 0x104650 and 4 for 0x105650. We're using the engine at
> >> 0x104000 for copy on the GF108...
> >
> > Yes, 0x104650 and 0x105650...
2019 Sep 06 · 4 replies · Xorg indefinitely hangs in kernelspace
...tainted 5.2.0-050200rc1-generic #201905191930
> [354073.722277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [354073.738332] Xorg D 0 920 854 0x00404004
> [354073.738334] Call Trace:
> [354073.738340] __schedule+0x2ba/0x650
> [354073.738342] schedule+0x2d/0x90
> [354073.738343] schedule_preempt_disabled+0xe/0x10
> [354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
> [354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
> [354073.738347] ww_mutex_lock+0x34/0x50
> [354073.738352] ttm_eu_reserve_buf...
2016 Nov 17 · 2 replies · Panic: file dsync-brain-mailbox.c: line 814 ...
...vecot.so.0(+0x9438e)
[0x7f3ccceb238e] -> /usr/lib/dovecot/libdovecot.so.0(+0x9447c)
[0x7f3ccceb247c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0)
[0x7f3ccce4ba4e] ->
dovecot/doveadm-server(dsync_brain_slave_recv_mailbox+0x3d8)
[0x7f3ccd8f66f8] -> dovecot/doveadm-server(dsync_brain_run+0x650)
[0x7f3ccd8f4110] -> dovecot/doveadm-server(+0x4143b) [0x7f3ccd8f443b] ->
dovecot/doveadm-server(+0x5735f) [0x7f3ccd90a35f] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7f3cccec6bdc]
-> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a)
[0x7f3cccec809a]...
2014 Nov 26 · 1 reply · Second copy engine on GF116
...sh this
>>>>> decompress engine from normal copy engine by looking at the CE capability
>>>>> register on falcon (0x00000650). If bit 2 is '1', then the falcon is
>>>>> a decompress engine.
>>>>
>>>> I presume you mean a +0x650 register on the pcopy engines (0x104000
>>>> and 0x105000). I only have access to the GF108 right now, which
>>>> returns 3 for 0x104650 and 4 for 0x105650. We're using the engine at
>>>> 0x104000 for copy on the GF108...
>>>
>>> Yes, 0x1046...
2014 Nov 21 · 3 replies · Second copy engine on GF116
...e people.
>
> It is probably easiest to just ignore it. You can distinguish this
> decompress engine from normal copy engine by looking at the CE capability
> register on falcon (0x00000650). If bit 2 is '1', then the falcon is
> a decompress engine.
I presume you mean a +0x650 register on the pcopy engines (0x104000
and 0x105000). I only have access to the GF108 right now, which
returns 3 for 0x104650 and 4 for 0x105650. We're using the engine at
0x104000 for copy on the GF108...
From my admittedly limited understanding, both 0x104000 and 0x105000
appear to be f...
2019 Sep 06 · 0 replies · [Spice-devel] Xorg indefinitely hangs in kernelspace
...generic #201905191930
> > [354073.722277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> > disables this message.
> > [354073.738332] Xorg D 0 920 854 0x00404004
> > [354073.738334] Call Trace:
> > [354073.738340] __schedule+0x2ba/0x650
> > [354073.738342] schedule+0x2d/0x90
> > [354073.738343] schedule_preempt_disabled+0xe/0x10
> > [354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
> > [354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
> > [354073.738347] ww_mutex_lock+0x34/0x50
> > [3540...
2013 Aug 27 · 7 replies · [PATCH] Btrfs: fix deadlock in uuid scan kthread
...e3e54>] __btrfs_write_out_cache+0x8c4/0xa80 [btrfs]
[36700.671704] [<ffffffffa05e4362>] btrfs_write_out_cache+0xb2/0xf0 [btrfs]
[36700.671710] [<ffffffffa05c4441>] ? free_extent_buffer+0x61/0xc0 [btrfs]
[36700.671716] [<ffffffffa0594c82>] btrfs_write_dirty_block_groups+0x562/0x650 [btrfs]
[36700.671723] [<ffffffffa0610092>] commit_cowonly_roots+0x171/0x24b [btrfs]
[36700.671729] [<ffffffffa05a4dde>] btrfs_commit_transaction+0x4fe/0xa10 [btrfs]
[36700.671735] [<ffffffffa0610af3>] create_subvol+0x5c0/0x636 [btrfs]
[36700.671742] [<ffffffffa05d49ff>]...
2018 Dec 20 · 1 reply · 4.20-rc6: WARNING: CPU: 30 PID: 197360 at net/core/flow_dissector.c:764 __skb_flow_dissect
...> > > [280155.348610] fib_multipath_hash+0x28c/0x2d0
> > > [280155.348613] ? fib_multipath_hash+0x28c/0x2d0
> > > [280155.348619] fib_select_path+0x241/0x32f
> > > [280155.348622] ? __fib_lookup+0x6a/0xb0
> > > [280155.348626] ip_route_output_key_hash_rcu+0x650/0xa30
> > > [280155.348631] ? __alloc_skb+0x9b/0x1d0
> > > [280155.348634] inet_rtm_getroute+0x3f7/0xb80
> >
> > inet_rtm_getroute builds a new packet with inet_rtm_getroute_build_skb
> > here without dev or sk.
>
> Ack
>
> >
> > > Probl...
2012 Mar 24 · 2 replies · Bug#665433: xen hypervisor FATAL PAGE FAULT after linux kernel BUG: unable to handle kernel paging request
...000ef29 0000ef29
(XEN) 00000011 000e0003 c1002227 00000061 00000246 de6b5cfc 00000069 0000007b
(XEN) 0000007b 000000d8 000000e0 00000000 ffbd8000
(XEN) Xen call trace:
(XEN) [<0fff0000>] ???
(XEN) [<ff15004d>] move_masked_irq+0x9d/0xc0
(XEN) [<ff14e079>] do_IRQ+0x89/0x650
(XEN) [<ff111a16>] do_multicall+0x156/0x2c0
(XEN) [<ff16963b>] do_page_fault+0x10b/0x320
(XEN) [<ff147642>] common_interrupt+0x52/0x60
(XEN) [<ff1ccc93>] hypercall+0x53/0x9b
(XEN)
(XEN) Pagetable walk from 0fff0000:
(XEN) L3[0x000] = 00000000d147f001 0001f01...
2019 Aug 02 · 1 reply · nouveau problem
...+0x62/0x190
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff94ab4796>]
__rpm_callback+0x36/0x80
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff94ab4804>]
rpm_callback+0x24/0x80
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff94ab4981>]
rpm_suspend+0x121/0x650
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff94ab5fea>]
pm_runtime_work+0x8a/0xc0
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946baf9f>]
process_one_work+0x17f/0x440
Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffff946bc036>]
worker_thread+0x126/0x...
2013 Apr 19 · 14 replies · [GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and
2018 Dec 20 · 0 replies · 4.20-rc6: WARNING: CPU: 30 PID: 197360 at net/core/flow_dissector.c:764 __skb_flow_dissect
..., but from a
> different call path:
>
> [280155.348610] fib_multipath_hash+0x28c/0x2d0
> [280155.348613] ? fib_multipath_hash+0x28c/0x2d0
> [280155.348619] fib_select_path+0x241/0x32f
> [280155.348622] ? __fib_lookup+0x6a/0xb0
> [280155.348626] ip_route_output_key_hash_rcu+0x650/0xa30
> [280155.348631] ? __alloc_skb+0x9b/0x1d0
> [280155.348634] inet_rtm_getroute+0x3f7/0xb80
inet_rtm_getroute builds a new packet with inet_rtm_getroute_build_skb
here without dev or sk.
> Problem is the synthesized skb for output route resolution does not have
> skb->dev or...
2016 Nov 18 · 0 replies · Panic: file dsync-brain-mailbox.c: line 814 ...
...[0x7f3ccceb238e] -> /usr/lib/dovecot/libdovecot.so.0(+0x9447c)
> [0x7f3ccceb247c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0)
> [0x7f3ccce4ba4e] ->
> dovecot/doveadm-server(dsync_brain_slave_recv_mailbox+0x3d8)
> [0x7f3ccd8f66f8] -> dovecot/doveadm-server(dsync_brain_run+0x650)
> [0x7f3ccd8f4110] -> dovecot/doveadm-server(+0x4143b) [0x7f3ccd8f443b] ->
> dovecot/doveadm-server(+0x5735f) [0x7f3ccd90a35f] ->
> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7f3cccec6bdc]
> -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x1...
2006 Dec 30 · 0 replies · fault report
...r/sbin/smbd(api_rpcTNP+0x16d) [0x5555556b4d0d]
#13 /usr/sbin/smbd(api_pipe_request+0x168) [0x5555556b5248]
#14 /usr/sbin/smbd [0x5555556b1426]
#15 /usr/sbin/smbd [0x5555556b18bd]
#16 /usr/sbin/smbd [0x5555555ca413]
#17 /usr/sbin/smbd [0x5555555ca7f2]
#18 /usr/sbin/smbd(reply_trans+0x650) [0x5555555cb110]
#19 /usr/sbin/smbd [0x555555617ac2]
#20 /usr/sbin/smbd(smbd_process+0x720) [0x555555618aa0]
#21 /usr/sbin/smbd(main+0xa0b) [0x5555557e4efb]
#22 /lib64/libc.so.6(__libc_start_main+0xf4) [0x2b62c5f66ae4]
#23 /usr/sbin/smbd [0x5555555b21c9]
[2006/12/29 23:14:28, 0] lib...