Displaying 20 results from an estimated 317 matches for "0x160".
2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been
experiencing ZFS lock ups regularly (perhaps once every 2-3 days).
The machine is a backup server and receives hourly ZFS snapshots from
another thumper - as such, the amount of zfs activity tends to be
reasonably high. After about 48 - 72 hours, the file system seems to lock
up and I'm unable to do anything
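For context on the workload described in this report, here is a minimal sketch of hourly incremental snapshot replication with zfs send / zfs receive over ssh; the dataset, snapshot, and host names are hypothetical and not taken from the post.

#!/usr/bin/env python3
# Minimal sketch (assumed setup, not the poster's actual configuration):
# hourly incremental ZFS replication from one host to a backup server
# using "zfs snapshot", "zfs send -i" and "zfs receive" over ssh.
import datetime
import subprocess

DATASET = "tank/backup"        # hypothetical dataset name
REMOTE = "backup-thumper"      # hypothetical receiving host

def new_snapshot_name():
    # One snapshot per hour, e.g. tank/backup@hourly-2009040113
    return "%s@hourly-%s" % (DATASET, datetime.datetime.now().strftime("%Y%m%d%H"))

def replicate(prev_snap, new_snap):
    # Take the new snapshot locally.
    subprocess.run(["zfs", "snapshot", new_snap], check=True)
    # Send only the changes between prev_snap and new_snap, piping the
    # stream over ssh into "zfs receive" on the backup server.
    send = subprocess.Popen(["zfs", "send", "-i", prev_snap, new_snap],
                            stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate(DATASET + "@hourly-prev", new_snapshot_name())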
2023 Dec 09
2
BUG: KFENCE: memory corruption in free_async+0x1d8/0x1e0
...memory at 0x0000000025448a9e [ ! ! ! . . . . . . .
. . . . . . ] (in kfence-#183):
[21963.079711] free_async+0x1d8/0x1e0
[21963.079728] usbdev_ioctl+0x138/0x1c40
[21963.079744] __arm64_sys_ioctl+0xd0/0x130
[21963.079769] invoke_syscall+0x7c/0x130
[21963.079793] el0_svc_common.constprop.0+0x6c/0x160
[21963.079815] do_el0_svc+0x38/0x120
[21963.079835] el0_svc+0x34/0xc0
[21963.079856] el0t_64_sync_handler+0x11c/0x150
[21963.079876] el0t_64_sync+0x198/0x19c
[21963.079892]
[21963.079899] kfence-#183: 0x0000000070088b17-0x00000000bed184b6, size=5,
cache=kmalloc-128
[21963.079899]
[21963.079916]...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...t; [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10679.527283] INFO: task glusterposixfsy:14941 blocked for m...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...0x2e8/0x340 [xfs]
[ 8280.189305] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189333] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8280.189340] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189345] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189348] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189352] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189356] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8280.189360] INFO: task glusteriotwr2:766 blocked for more than 120
seconds.
[...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...t; [ 8280.189305] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
> [ 8280.189333] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [ 8280.189340] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
> [ 8280.189345] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
> [ 8280.189348] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
> [ 8280.189352] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
> [ 8280.189356] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
> [ 8280.189360] INFO: task glusteriotwr2:766 blocked for more...
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...b0>] ? wake_up_state+0x20/0x20
>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/
>>> 0x160
>>> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/
>>> 0x160
>>> [10679.527283]...
2006 Jul 06
12
kernel BUG at net/core/dev.c:1133!
...forward_finish+0x0/0x70
[<c05bbee0>] br_forward_finish+0x0/0x70
[<c04f0f4e>] nf_hook_slow+0x6e/0x120
[<c05bbee0>] br_forward_finish+0x0/0x70
[<c05bc044>] __br_forward+0x74/0x80
[<c05bbee0>] br_forward_finish+0x0/0x70
[<c05bceb1>] br_handle_frame_finish+0xd1/0x160
[<c05bcde0>] br_handle_frame_finish+0x0/0x160
[<c05c0e0b>] br_nf_pre_routing_finish+0xfb/0x480
[<c05bcde0>] br_handle_frame_finish+0x0/0x160
[<c05c0d10>] br_nf_pre_routing_finish+0x0/0x480
[<c054fe13>] ip_nat_in+0x43/0xc0
[<c05c0d10>] br_nf_pre_routing_fini...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...t;ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
>> [ 8280.189333] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>> [ 8280.189340] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
>> [ 8280.189345] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/
>> 0x160
>> [ 8280.189348] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
>> [ 8280.189352] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
>> [ 8280.189356] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/
>> 0x160
>> [ 8280.189360] INFO: task glusterio...
2018 Jun 02
6
[Bug 106787] New: Thinkpad P52s NVIDIA Quadro P500 gp108 WARNING: CPU: 3 PID: 395 at drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c:86 nvkm_pmu_reset+0x14c/0x160
https://bugs.freedesktop.org/show_bug.cgi?id=106787
Bug ID: 106787
Summary: Thinkpad P52s NVIDIA Quadro P500 gp108 WARNING: CPU: 3
PID: 395 at
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c:86
nvkm_pmu_reset+0x14c/0x160 [nouveau]
Product: xorg
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
Assignee: nouveau at lists.freedesktop.org...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...0
>>>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/
>>>>> 0x160
>>>>> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>>> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/
>>>>> 0x...
2014 Aug 21
1
Cluster blocked, so we have to reboot all nodes to avoid it. Are there any patches for it? Thanks.
Hi, everyone
We have hit this blocked cluster several times, and the log is always the same; we have to reboot all the nodes of the cluster to recover.
Is there any patch that fixes this bug?
[<ffffffff817539a5>] schedule_timeout+0x1e5/0x250
[<ffffffff81755a77>] wait_for_completion+0xa7/0x160
[<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0
[<ffffffffa0564063>] __ocfs2_cluster_lock.isra.30+0x1f3/0x820 [ocfs2]
When we test with many nodes in one cluster, maybe ten or twenty, the cluster always becomes blocked; the log is below.
The kernel version is 3.13.6.
Aug 20...
2006 Jan 20
7
Wine and Kaleidagraph
Hi.
I am a recent Debian user who doesn't know much about Linux, but I work hard :).
I was bored with MS Windows and decided to switch, but I have to work with some Windows programs.
I really have a problem: I must use Kaleidagraph in my work. I've seen that this program is available in the Programs database, so I've installed and upgraded my Wine.
After that I have
2015 Mar 30
1
Lockup/panic caused by nouveau_fantog_update recursion
...le_edge_irq+0x6e/0x120
[ 9227.509941] [<ffffffff81016952>] handle_irq+0x22/0x40
[ 9227.509943] [<ffffffff817b453f>] do_IRQ+0x4f/0xf0
[ 9227.509945] [<ffffffff817b23ad>] common_interrupt+0x6d/0x6d
[ 9227.509945] <EOI> [<ffffffff8164cf76>] ? cpuidle_enter_state+0x66/0x160
[ 9227.509948] [<ffffffff8164cf61>] ? cpuidle_enter_state+0x51/0x160
[ 9227.509949] [<ffffffff8164d157>] cpuidle_enter+0x17/0x20
[ 9227.509951] [<ffffffff810b5741>] cpu_startup_entry+0x351/0x400
[ 9227.509953] [<ffffffff8179f037>] rest_init+0x77/0x80
[ 9227.509955] [<...
2018 Aug 05
2
[PATCH net-next 0/6] virtio_net: Add ethtool stat items
...hange_flags+0x469/0x630
[ 46.167152] ? dev_set_allmulti+0x10/0x10
[ 46.167157] ? __lock_acquire+0x9c1/0x4b50
[ 46.167166] ? print_irqtrace_events+0x280/0x280
[ 46.167174] ? kvm_clock_read+0x1f/0x30
[ 46.167191] ? rtnetlink_put_metrics+0x530/0x530
[ 46.167205] dev_change_flags+0x8b/0x160
[ 46.167236] do_setlink+0xa17/0x39f0
[ 46.167248] ? sched_clock_cpu+0x18/0x2b0
[ 46.167267] ? rtnl_dump_ifinfo+0xd20/0xd20
[ 46.167277] ? print_irqtrace_events+0x280/0x280
[ 46.167285] ? kvm_clock_read+0x1f/0x30
[ 46.167316] ? debug_show_all_locks+0x3b0/0x3b0
[ 46.167321] ? kvm...
2005 Jun 02
0
RE: Badness in softirq.c / no modules loaded / related to network interface
...Jun 2 12:13:16 zen kernel: Badness in local_bh_enable at kernel/softirq.c:140
Jun 2 12:13:16 zen kernel: [local_bh_enable+130/144] local_bh_enable+0x82/0x90
Jun 2 12:13:16 zen kernel: [skb_checksum+317/704] skb_checksum+0x13d/0x2c0
Jun 2 12:13:16 zen kernel: [udp_poll+154/352] udp_poll+0x9a/0x160
Jun 2 12:13:16 zen kernel: [sock_poll+41/64] sock_poll+0x29/0x40
Jun 2 12:13:16 zen kernel: [do_pollfd+149/160] do_pollfd+0x95/0xa0
Jun 2 12:13:16 zen kernel: [do_poll+106/208] do_poll+0x6a/0xd0
Jun 2 12:13:16 zen kernel: [sys_poll+353/576] sys_poll+0x161/0x240
Jun 2 12:13:16 zen kernel:...
2014 Apr 02
2
random crashes
...08>] synchronize_sched+0x58/0x60
Apr 1 21:22:58 sg1 kernel: [<ffffffff81097090>] ? wakeme_after_rcu+0x0/0x20
Apr 1 21:22:58 sg1 kernel: [<ffffffff812229dc>]
install_session_keyring_to_cred+0x6c/0xd0
Apr 1 21:22:58 sg1 kernel: [<ffffffff81222b73>]
join_session_keyring+0x133/0x160
Apr 1 21:22:58 sg1 kernel: [<ffffffff810e2057>] ?
audit_syscall_entry+0x1d7/0x200
Apr 1 21:22:58 sg1 kernel: [<ffffffff81221778>]
keyctl_join_session_keyring+0x38/0x70
Apr 1 21:22:58 sg1 kernel: [<ffffffff812223a0>] sys_keyctl+0x170/0x190
Apr 1 21:22:58 sg1 kernel: [<ffff...
2013 Sep 10
1
Errors on NFS server
...[<ffffffffa03ecddc>] ? decode_filename+0x1c/0x70 [nfsd]
[<ffffffffa03dd43e>] ? nfsd_dispatch+0xfe/0x240 [nfsd]
[<ffffffffa033b614>] ? svc_process_common+0x344/0x640 [sunrpc]
[<ffffffff81063330>] ? default_wake_function+0x0/0x20
[<ffffffffa033bc50>] ? svc_process+0x110/0x160 [sunrpc]
[<ffffffffa03ddb62>] ? nfsd+0xc2/0x160 [nfsd]
[<ffffffffa03ddaa0>] ? nfsd+0x0/0x160 [nfsd]
[<ffffffff81096956>] ? kthread+0x96/0xa0
[<ffffffff8100c0ca>] ? child_rip+0xa/0x20
[<ffffffff810968c0>] ? kthread+0x0/0xa0
[<ffffffff8100c0c0>] ? child_rip+0x0/0x2...
2017 Aug 07
1
gluster stuck when trying to list a successful mount
...2017] [<ffffffff8120f23f>] ? getname_flags+0x4f/0x1a0
[Mon Aug 7 17:09:32 2017] [<ffffffff8120c94b>] filename_lookup+0x2b/0xc0
[Mon Aug 7 17:09:32 2017] [<ffffffff81210367>] user_path_at_empty+0x67/0xc0
[Mon Aug 7 17:09:32 2017] [<ffffffff810f5570>] ? futex_wake+0x80/0x160
[Mon Aug 7 17:09:32 2017] [<ffffffff812103d1>] user_path_at+0x11/0x20
[Mon Aug 7 17:09:32 2017] [<ffffffff81203843>] vfs_fstatat+0x63/0xc0
[Mon Aug 7 17:09:32 2017] [<ffffffff81203e11>] SYSC_newlstat+0x31/0x60
[Mon Aug 7 17:09:32 2017] [<ffffffff810f85c0>] ? SyS_fute...