Displaying 20 results from an estimated 711 matches for "0xe".
2017 Sep 08 · 2 · GlusterFS as virtual machine storage
...the last 1 seconds, disconnecting.
[2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid...
2017 Sep 08 · 0 · GlusterFS as virtual machine storage
...onnecting.
> [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 201...
2017 Sep 08 · 1 · GlusterFS as virtual machine storage
...onnecting.
> [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 201...
2017 Sep 08 · 1 · GlusterFS as virtual machine storage
...nnecting.
> [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 2017-09-08...
2017 Sep 08 · 0 · GlusterFS as virtual machine storage
...the last 1 seconds, disconnecting.
[2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
op(FINODELK(30)) called at 2017-09-08 09:31:45.626882 (xid...
2017 Sep 08 · 3 · GlusterFS as virtual machine storage
...onnecting.
> [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
> (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
> (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
> (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fbcad8d2c20] )))))
> 0-gv_openstack_1-client-1: forced unwinding frame type(GlusterFS 3.3)
> op(FINODELK(30)) called at 201...
2017 Sep 08 · 3 · GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told.
I'm using 30 seconds for the timeout, and indeed when a node goes down
the VM freezes for 30 seconds, but I've never seen them go read-only
because of that.
I _only_ use virtio though, maybe it's that. What are you using?
On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote:
> Back to replica 3 w/o arbiter. Two fio jobs
2019 Aug 06 · 2 · Xorg indefinitely hangs in kernelspace
...7] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[354073.738332] Xorg D 0 920 854 0x00404004
[354073.738334] Call Trace:
[354073.738340] __schedule+0x2ba/0x650
[354073.738342] schedule+0x2d/0x90
[354073.738343] schedule_preempt_disabled+0xe/0x10
[354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
[354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
[354073.738347] ww_mutex_lock+0x34/0x50
[354073.738352] ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
[354073.738356] qxl_release_reserve_list+0x67/0x150 [qxl]
[354073.738358] ? qxl_bo_pin+0...
2019 Aug 06 · 2 · Xorg indefinitely hangs in kernelspace
...7] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[354073.738332] Xorg D 0 920 854 0x00404004
[354073.738334] Call Trace:
[354073.738340] __schedule+0x2ba/0x650
[354073.738342] schedule+0x2d/0x90
[354073.738343] schedule_preempt_disabled+0xe/0x10
[354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750
[354073.738346] __ww_mutex_lock_slowpath+0x16/0x20
[354073.738347] ww_mutex_lock+0x34/0x50
[354073.738352] ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
[354073.738356] qxl_release_reserve_list+0x67/0x150 [qxl]
[354073.738358] ? qxl_bo_pin+0...
2017 Mar 02 · 2 · [Bug 100035] New: nouveau runtime pm causes soft lockups and hangs during boot
....593276] nvkm_subdev_preinit+0x34/0x120 [nouveau]
[ 56.593291] nvkm_device_init+0x60/0x270 [nouveau]
[ 56.593305] nvkm_udevice_init+0x48/0x60 [nouveau]
[ 56.593313] nvkm_object_init+0x40/0x190 [nouveau]
[ 56.593320] nvkm_object_init+0x80/0x190 [nouveau]
[ 56.593328] nvkm_client_init+0xe/0x10 [nouveau]
[ 56.593343] nvkm_client_resume+0xe/0x10 [nouveau]
[ 56.593350] nvif_client_resume+0x14/0x20 [nouveau]
[ 56.593365] nouveau_do_resume+0x4d/0x130 [nouveau]
[ 56.593379] nouveau_pmops_runtime_resume+0x72/0x150 [nouveau]
[ 56.593381] pci_pm_runtime_resume+0x7b/0xa0
[ 56...
2013 May 07 · 2 · [PATCH] KVM: Fix kvm_irqfd_init initialization
...h_init() will fail with -EEXIST, then kvm_irqfd_exit() will be
called on the error handling path. This way, the kvm_irqfd system will
not be ready.
This patch fixes the following:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
PGD 0
Oops: 0002 [#1] SMP
Modules linked in: vhost_net
CPU 6
Pid: 4257, comm: qemu-system-x86 Not tainted 3.9.0-rc3+ #757 Dell Inc. OptiPlex 790/0V5HMK
RIP: 0010:[<ffffffff81c0721e>] [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
RSP: 0018:ffff880221721cc8 EFLAGS: 00010046
RAX: 000...
2013 May 07 · 2 · [PATCH] KVM: Fix kvm_irqfd_init initialization
...h_init() will fail with -EEXIST, then kvm_irqfd_exit() will be
called on the error handling path. This way, the kvm_irqfd system will
not be ready.
This patch fixes the following:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
PGD 0
Oops: 0002 [#1] SMP
Modules linked in: vhost_net
CPU 6
Pid: 4257, comm: qemu-system-x86 Not tainted 3.9.0-rc3+ #757 Dell Inc. OptiPlex 790/0V5HMK
RIP: 0010:[<ffffffff81c0721e>] [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
RSP: 0018:ffff880221721cc8 EFLAGS: 00010046
RAX: 000...
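The failure mode described in the patch excerpt above is an initialization-ordering bug: shared state is brought up before a step that can legitimately fail with -EEXIST, and the error path then tears that shared state down again even though an earlier, successful initialization still depends on it. The following is a minimal C sketch of that pattern only; it is not the actual KVM code, and every name in it (irqfd_init, arch_init, module_init_buggy) is invented for illustration.

/*
 * Minimal, hypothetical sketch of the error-path ordering bug described
 * in the patch excerpt above.  This is NOT the actual KVM code; all names
 * are invented for illustration.
 */
#include <stdio.h>

#define EEXIST 17

static int arch_ready;   /* stands in for architecture-level setup state  */
static int irqfd_ready;  /* stands in for the shared "irqfd" machinery    */

static int irqfd_init(void)  { irqfd_ready = 1; return 0; }
static void irqfd_exit(void) { irqfd_ready = 0; }   /* destroys shared state */

static int arch_init(void)
{
    if (arch_ready)
        return -EEXIST;  /* a later initializer finds it already set up */
    arch_ready = 1;
    return 0;
}

/* Buggy ordering: the shared irqfd state is set up first, and the error
 * path unconditionally calls irqfd_exit(), clobbering state that a
 * previous, successful caller still relies on. */
static int module_init_buggy(void)
{
    int r = irqfd_init();
    if (r)
        return r;

    r = arch_init();
    if (r) {
        irqfd_exit();    /* leaves the irqfd system "not ready" */
        return r;
    }
    return 0;
}

int main(void)
{
    module_init_buggy();  /* first init succeeds: irqfd_ready == 1        */
    module_init_buggy();  /* second init fails with -EEXIST, and its      */
                          /* error path resets irqfd_ready to 0           */

    /* Any later use of the irqfd state now touches something that no
     * longer exists -- the NULL pointer dereference quoted above.        */
    printf("irqfd_ready after second init attempt: %d\n", irqfd_ready);
    return 0;
}

The usual fix for this class of bug is either to perform the step that can fail before setting up the shared state, or to make the error path undo only what the failing call itself set up.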
2014 Oct 13 · 2 · kernel crashes after soft lockups in xen domU
...008.101006] [<ffffffff81006d22>] ? check_events+0x12/0x20
[354008.101011] [<ffffffff81006d0f>] ? xen_restore_fl_direct_reloc+0x4/0x4
[354008.101017] [<ffffffff81071153>] ? arch_local_irq_restore+0x7/0x8
[354008.101024] [<ffffffff8135049f>] ? _raw_spin_unlock_irqrestore+0xe/0xf
[354008.101031] [<ffffffff810be895>] ? release_pages+0xf4/0x14d
[354008.101038] [<ffffffff810de78b>] ? free_pages_and_swap_cache+0x48/0x60
[354008.101045] [<ffffffff810cf527>] ? tlb_flush_mmu+0x37/0x50
[354008.101049] [<ffffffff810cf54c>] ? tlb_finish_mmu+0xc/0x31
[...
2012 Mar 27 · 0 · App. Delisprint hangs on second run
...0008: error 5
warning: could not attach to 0008
Can't attach process 0008: error 5
warning: could not attach to 0008
Can't attach process 0008: error 5
warning: could not attach to 0008
Can't attach process 0008: error 5
warning: could not attach to 0008
0xf778342e __kernel_vsyscall+0xe in [vdso].so: int $0x80
Backtracing for thread 001f in process 000e (C:\windows\system32\services.exe):
Backtrace:
=>0 0xf778342e __kernel_vsyscall+0xe() in [vdso].so (0x00a4e598)
1 0xf75e3bcb __libc_read+0x4a() in libpthread.so.0 (0x00a4e598)
2 0x7bc77758 wait_reply+0x57(cookie=0xa4e728) [...
2018 Jan 05 · 4 · Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
...fff81006b9e>] ? xen_extend_mmu_update+0xde/0x1b0
(early) [<ffffffff81006fcd>] ? xen_set_pmd_hyper+0x9d/0xc0
(early) [<ffffffff81c5e8ac>] ? early_ioremap_init+0x98/0x133
(early) [<ffffffff81c45221>] ? setup_arch+0x40/0xca6
(early) [<ffffffff8107e0ee>] ? vprintk_default+0xe/0x10
(early) [<ffffffff8154b0cd>] ? printk+0x4f/0x52
(early) [<ffffffff81c3fdda>] ? start_kernel+0xdc/0x43b
(early) [<ffffffff81c47eb0>] ? reserve_early+0x30/0x39
(early) [<ffffffff81c3f33a>] ? x86_64_start_reservations+0x125/0x129
(early) [<ffffffff81c4309c>] ? x...
2019 Sep 05 · 2 · Xorg indefinitely hangs in kernelspace
On 05.09.19 10:14, Gerd Hoffmann wrote:
> On Tue, Aug 06, 2019 at 09:00:10PM +0300, Jaak Ristioja wrote:
>> Hello!
>>
>> I'm writing to report a crash in the QXL / DRM code in the Linux kernel.
>> I originally filed the issue on LaunchPad and more details can be found
>> there, although I doubt whether these details are useful.
>
> Any change with kernel
2019 Sep 05 · 2 · Xorg indefinitely hangs in kernelspace
On 05.09.19 10:14, Gerd Hoffmann wrote:
> On Tue, Aug 06, 2019 at 09:00:10PM +0300, Jaak Ristioja wrote:
>> Hello!
>>
>> I'm writing to report a crash in the QXL / DRM code in the Linux kernel.
>> I originally filed the issue on LaunchPad and more details can be found
>> there, although I doubt whether these details are useful.
>
> Any change with kernel
2019 Dec 26 · 2 · nfs causes Centos 7.7 system to hang
...f45d661>] user_path_at+0x11/0x20
[26399.968501]  [<ffffffffaf450343>] vfs_fstatat+0x63/0xc0
[26399.968505]  [<ffffffffaf4506fe>] SYSC_newstat+0x2e/0x60
[26399.968530]  [<ffffffffaf33e3f6>] ? __audit_syscall_exit+0x1e6/0x280
[26399.968534]  [<ffffffffaf450bbe>] SyS_newstat+0xe/0x10
[26399.968544]  [<ffffffffaf98dede>] system_call_fastpath+0x25/0x2a
[27479.967352] INFO: task cp:4656 blocked for more than 120 seconds.
[27479.967414] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[27479.967711] cp              D ffff8936b654800...
2019 Apr 30 · 2 · Xorg hangs in kernelspace with qxl
...9 blocked for more than 120 seconds.
Not tainted 5.0.0-13-generic #14-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Xorg D 0 879 790 0x00400004
Call Trace:
__schedule+0x2d0/0x840
schedule+0x2c/0x70
schedule_preempt_disabled+0xe/0x10
__ww_mutex_lock.isra.11+0x3e0/0x750
__ww_mutex_lock_slowpath+0x16/0x20
ww_mutex_lock+0x34/0x50
ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
qxl_release_reserve_list+0x67/0x150 [qxl]
? qxl_bo_pin+0x11d/0x200 [qxl]
qxl_cursor_atomic_update+0x1b0/0x2e0 [qxl]
drm_atomic_helper_commit_planes+0x...
2019 Apr 30 · 2 · Xorg hangs in kernelspace with qxl
...9 blocked for more than 120 seconds.
Not tainted 5.0.0-13-generic #14-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Xorg D 0 879 790 0x00400004
Call Trace:
__schedule+0x2d0/0x840
schedule+0x2c/0x70
schedule_preempt_disabled+0xe/0x10
__ww_mutex_lock.isra.11+0x3e0/0x750
__ww_mutex_lock_slowpath+0x16/0x20
ww_mutex_lock+0x34/0x50
ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm]
qxl_release_reserve_list+0x67/0x150 [qxl]
? qxl_bo_pin+0x11d/0x200 [qxl]
qxl_cursor_atomic_update+0x1b0/0x2e0 [qxl]
drm_atomic_helper_commit_planes+0x...