Displaying 20 results from an estimated 1139 matches for "0x90".
2017 Sep 08 · 2 · GlusterFS as virtual machine storage
...365:saved_frames_unwind]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
(--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170]
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid=0x27ab5)
[2017-09-08 09:31:48.382451] E [MSGID: 114031]
[client-rpc-fops.c:159...
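The trace above is the GlusterFS client tearing down its RPC connection to a brick and force-failing every outstanding operation (here an inode lock, FINODELK); the E [MSGID: 114031] line that follows it reports the resulting failed remote operation. A minimal sketch of the commands typically used to check brick and peer health in that situation, assuming the volume name gv_openstack_1 taken from the log line above:

    # Show whether the volume's brick processes are online and reachable.
    gluster volume status gv_openstack_1

    # Confirm that all peers in the trusted pool are still connected.
    gluster peer status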
2017 Sep 08 · 0 · GlusterFS as virtual machine storage
...; (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
>> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid=0x27ab5)
> [2017-09-08 09:31:48.382451] E [MSGID: 114031...
2017 Sep 08 · 1 · GlusterFS as virtual machine storage
...; (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
>> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid=0x27ab5)
> [2017-09-08 09:31:48.382451] E [MSGID: 114031...
2017 Sep 08 · 1 · GlusterFS as virtual machine storage
...> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
> (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170]
> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid=0x27ab5)
> [2017-09-08 09:31:48.382451] E [MSGID: 114031]
> ...
2017 Sep 08 · 0 · GlusterFS as virtual machine storage
...365:saved_frames_unwind]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
(--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170]
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid=0x27ab5)
[2017-09-08 09:31:48.382451] E [MSGID: 114031]
[client-rpc-fops.c:159...
2017 Sep 08 · 3 · GlusterFS as virtual machine storage
...; (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
>> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170]
>> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid=0x27ab5)
> [2017-09-08 09:31:48.382451] E [MSGID: 114031...
2017 Sep 08 · 3 · GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told.
I'm using 30 seconds for the timeout, and indeed when a node goes down
the VMs freeze for 30 seconds, but I've never seen them go read-only for
that.
I _only_ use virtio though, maybe it's that. What are you using?
On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote:
> Back to replica 3 w/o arbiter. Two fio jobs
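The timeout discussed here is GlusterFS's network.ping-timeout volume option, which controls how long a client waits for an unresponsive brick before dropping the connection (and triggering the frame unwinding shown in the earlier results). A minimal sketch of inspecting and changing it, assuming the volume name from the logs above; the upstream default is 42 seconds:

    # Show the currently effective ping timeout for the volume.
    gluster volume get gv_openstack_1 network.ping-timeout

    # Lower it to the 30 seconds mentioned above (affects all clients of the volume).
    gluster volume set gv_openstack_1 network.ping-timeout 30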
2019 Aug 06 · 2 · Xorg indefinitely hangs in kernelspace
....0-050200rc1-generic #201905191930
[354073.722277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[354073.738332] Xorg D 0 920 854 0x00404004
[354073.738334] Call Trace:
[354073.738340] __schedule+0x2ba/0x650
[354073.738342] schedule+0x2d/0x90
[354073.738343] schedule_preempt_disabled+0xe/0x10
[354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
[354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
[354073.738347] ww_mutex_lock+0x34/0x50
[354073.738352] ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
[354073.738356] qxl_release_reserve_list+0...
2019 Aug 06 · 2 · Xorg indefinitely hangs in kernelspace
....0-050200rc1-generic #201905191930
[354073.722277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[354073.738332] Xorg D 0 920 854 0x00404004
[354073.738334] Call Trace:
[354073.738340] __schedule+0x2ba/0x650
[354073.738342] schedule+0x2d/0x90
[354073.738343] schedule_preempt_disabled+0xe/0x10
[354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
[354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
[354073.738347] ww_mutex_lock+0x34/0x50
[354073.738352] ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
[354073.738356] qxl_release_reserve_list+0...
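Both Xorg reports above come from the kernel's hung-task watchdog, and each trace quotes the standard way to silence it. For reference, a short sketch of the knob involved; the sysctl path is the stock one quoted in the trace, and the values shown are only examples:

    # Current warning threshold in seconds (0 means the check is disabled).
    cat /proc/sys/kernel/hung_task_timeout_secs

    # Disable the "task blocked for more than N seconds" warnings,
    # exactly as the message in the trace suggests.
    echo 0 > /proc/sys/kernel/hung_task_timeout_secs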
2016 Apr 13 · 3 · Bug#820862: xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds
...c50
[ 1680.060183] Call Trace:
[ 1680.060196] [<ffffffff8113d0e0>] ? wait_on_page_read+0x60/0x60
[ 1680.060204] [<ffffffff815114a9>] ? io_schedule+0x99/0x120
[ 1680.060210] [<ffffffff8113d0ea>] ? sleep_on_page+0xa/0x10
[ 1680.060216] [<ffffffff8151182c>] ? __wait_on_bit+0x5c/0x90
[ 1680.060222] [<ffffffff8113cedf>] ? wait_on_page_bit+0x7f/0x90
[ 1680.060231] [<ffffffff810a7e90>] ? autoremove_wake_function+0x30/0x30
[ 1680.060246] [<ffffffff8114a46d>] ? pagevec_lookup_tag+0x1d/0x30
[ 1680.060254] [<ffffffff8113cfc0>] ? filemap_fdatawait_range+0xd0/0x1...
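In frames such as __wait_on_bit+0x5c/0x90 the first number is the offset into the function and the second is the function's total size, which is why "0x90" turns up in so many of these traces. As a rough sketch, such a frame can be mapped back to a source line with the kernel's scripts/faddr2line helper, assuming a source tree and a vmlinux with debug info that match the running kernel (older trees like the one in this report may not ship the script, but it works on any object file with debug info):

    # Run from the top of the matching kernel source tree.
    ./scripts/faddr2line vmlinux __wait_on_bit+0x5c/0x90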
2015 Oct 06 · 41 · [Bug 92307] New: NV50: WARNING: ... at include/drm/drm_crtc.h:1577 drm_helper_choose_encoder_dpms+0x8a/0x90 [drm_kms_helper]()
https://bugs.freedesktop.org/show_bug.cgi?id=92307
Bug ID: 92307
Summary: NV50: WARNING: ... at include/drm/drm_crtc.h:1577 drm_helper_choose_encoder_dpms+0x8a/0x90 [drm_kms_helper]()
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
Assignee: nouveau at...
2016 Apr 19 · 0 · Bug#820862: AW: Bug#820862: Acknowledgement (xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds)
...920.052187] Call Trace:
[ 1920.052199] [<ffffffff811d7620>] ? generic_block_bmap+0x50/0x50
[ 1920.052208] [<ffffffff815114a9>] ? io_schedule+0x99/0x120
[ 1920.052214] [<ffffffff811d762a>] ? sleep_on_buffer+0xa/0x10
[ 1920.052220] [<ffffffff8151182c>] ? __wait_on_bit+0x5c/0x90
[ 1920.052226] [<ffffffff811d7620>] ? generic_block_bmap+0x50/0x50
[ 1920.052232] [<ffffffff815118d7>] ? out_of_line_wait_on_bit+0x77/0x90
[ 1920.052241] [<ffffffff810a7e90>] ? autoremove_wake_function+0x30/0x30
[ 1920.052257] [<ffffffffa0054b49>] ? jbd2_journal_commit_t...
2013 Jan 15 · 0 · nouveau lockdep splat on init
...ng detected ]
[ 40.864179] 3.8.0-rc3-patser+ #915 Tainted: G W
[ 40.864179] ---------------------------------------------
[ 40.864179] modprobe/524 is trying to acquire lock:
[ 40.864179] (&subdev->mutex){+.+.+.}, at: [<ffffffffa0333ba3>] nouveau_instobj_create_+0x43/0x90 [nouveau]
[ 40.864179]
[ 40.864179] but task is already holding lock:
[ 40.864179] (&subdev->mutex){+.+.+.}, at: [<ffffffffa03467f4>] nv50_disp_data_ctor+0x94/0x160 [nouveau]
[ 40.864179]
[ 40.864179] other info that might help us debug this:
[ 40.864179] Possible unsaf...
2019 Sep 05 · 2 · Xorg indefinitely hangs in kernelspace
On 05.09.19 10:14, Gerd Hoffmann wrote:
> On Tue, Aug 06, 2019 at 09:00:10PM +0300, Jaak Ristioja wrote:
>> Hello!
>>
>> I'm writing to report a crash in the QXL / DRM code in the Linux kernel.
>> I originally filed the issue on LaunchPad and more details can be found
>> there, although I doubt whether these details are useful.
>
> Any change with kernel
2019 Sep 05 · 2 · Xorg indefinitely hangs in kernelspace
On 05.09.19 10:14, Gerd Hoffmann wrote:
> On Tue, Aug 06, 2019 at 09:00:10PM +0300, Jaak Ristioja wrote:
>> Hello!
>>
>> I'm writing to report a crash in the QXL / DRM code in the Linux kernel.
>> I originally filed the issue on LaunchPad and more details can be found
>> there, although I doubt whether these details are useful.
>
> Any change with kernel
2006 Jan 02 · 1 · 2.6.15-rc6 OOPS
...0000 f7caf400 f71b9df0 f71503d4 ffffffff 00000000 f7159c68
> kernel: Call Trace:
> kernel: [<c025eb29>] memcpy_toiovec+0x29/0x50
> kernel: [<c019dbda>] ext3_lookup+0x3a/0xc0
> kernel: [<c0167c8e>] real_lookup+0xae/0xd0
> kernel: [<c0167f35>] do_lookup+0x85/0x90
> kernel: [<c016872f>] __link_path_walk+0x7ef/0xdd0
> kernel: [<c0168d5e>] link_path_walk+0x4e/0xd0
> kernel: [<c016907f>] path_lookup+0x9f/0x170
> kernel: [<c01693cf>] __user_walk+0x2f/0x60
> kernel: [<c0163b5d>] vfs_stat+0x1d/0x60
> kernel: [<...
2018 Aug 06 · 1 · [PATCH v4 7/8] drm/nouveau: Fix deadlocks in nouveau_connector_detect()
...g_task_timeout_secs" disables this message.
> [ 861.486332] kworker/0:2 D 0 61 2 0x80000000
> [ 861.487044] Workqueue: events nouveau_display_hpd_work [nouveau]
> [ 861.487737] Call Trace:
> [ 861.488394] __schedule+0x322/0xaf0
> [ 861.489070] schedule+0x33/0x90
> [ 861.489744] rpm_resume+0x19c/0x850
> [ 861.490392] ? finish_wait+0x90/0x90
> [ 861.491068] __pm_runtime_resume+0x4e/0x90
> [ 861.491753] nouveau_display_hpd_work+0x22/0x60 [nouveau]
> [ 861.492416] process_one_work+0x231/0x620
> [ 861.493068] worker_thread+0x44/0x3...
2018 Jul 16 · 0 · [PATCH 2/5] drm/nouveau: Grab RPM ref while probing outputs
...> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 246.676527] kworker/4:0 D 0 37 2 0x80000000
[ 246.677580] Workqueue: events output_poll_execute [drm_kms_helper]
[ 246.678704] Call Trace:
[ 246.679753] __schedule+0x322/0xaf0
[ 246.680916] schedule+0x33/0x90
[ 246.681924] schedule_preempt_disabled+0x15/0x20
[ 246.683023] __mutex_lock+0x569/0x9a0
[ 246.684035] ? kobject_uevent_env+0x117/0x7b0
[ 246.685132] ? drm_fb_helper_hotplug_event.part.28+0x20/0xb0 [drm_kms_helper]
[ 246.686179] mutex_lock_nested+0x1b/0x20
[ 246.687278] ? mutex_lock_nes...
2017 Apr 15 · 1 · [Bug 100691] New: [4.10] BUG: KASAN: use-after-free in drm_calc_vbltimestamp_from_scanoutpos+0x625/0x740
...eau_fence.c:148)
drm_update_vblank_count+0x16a/0x870 (drivers/gpu/drm/drm_irq.c:150)
? store_vblank+0x2c0/0x2c0 (drivers/gpu/drm/drm_irq.c:79)
drm_handle_vblank+0x14a/0x7d0 (drivers/gpu/drm/drm_irq.c:1704)
? trace_hardirqs_off+0xd/0x10 (kernel/locking/lockdep.c:2780)
? drm_crtc_wait_one_vblank+0x90/0x90 (drivers/gpu/drm/drm_irq.c:1252)
? debug_check_no_locks_freed+0x280/0x280 (kernel/locking/lockdep.c:4270)
? cpuacct_charge+0x240/0x400 (kernel/sched/cpuacct.c:349)
drm_crtc_handle_vblank+0x63/0x90 (drivers/gpu/drm/drm_irq.c:1755)
? find_next_bit+0x18/0x20 (lib/find_bit.c:63)
nouveau_displ...
2018 Aug 13 · 6 · [PATCH v7 0/5] Fix connector probing deadlocks from RPM bugs
Latest version of https://patchwork.freedesktop.org/series/46815/, with
one small change re: ilia
Lyude Paul (5):
drm/nouveau: Fix bogus drm_kms_helper_poll_enable() placement
drm/nouveau: Remove duplicate poll_enable() in pmops_runtime_suspend()
drm/nouveau: Fix deadlock with fb_helper with async RPM requests
drm/nouveau: Use pm_runtime_get_noresume() in connector_detect()