Displaying 20 results from an estimated 21 matches for "thread_return".
2012 Jan 22
2
CentOS 6.2 bug?
Hello,
Is anyone experiencing this?
I have a sympa process (bulk.pl) which triggers this bug:
------------[ cut here ]------------
WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
Hardware name: X8DTU-LN4+
Modules linked in: cpufreq_ondemand acpi_cpufreq freq_table mperf
ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack
ip6table_filter ip6_tables ipv6 microcode serio_raw i2c_i801 i2c_core
iTCO_wdt iTCO_vendor_support igb ioatdm...
2012 Jan 13
1
what to do about [abrt] full crash report kernel taint?
...stuff from the log and one of the emails:
log:********************************
# grep -e '12 10:16' -e '10 14:46' -e '9 15:44' /var/log/messages
Jan 9 15:44:11 name kernel: ------------[ cut here ]------------
Jan 9 15:44:11 name kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
Jan 9 15:44:11 name kernel: Hardware name: Precision WorkStation 490
Jan 9 15:44:11 name kernel: Modules linked in: fuse nfs lockd fscache
nfs_acl auth_rpcgss autofs4 sunrpc p4_clockmod freq_table
speedstep_lib ipv6 ppdev parport_pc parport sg microcode dcdbas
serio_ra...
2012 Feb 06
1
Unknown KERNEL Warning in boot messages
...1/0x44
[<ffffffff81c1fc2e>] ? start_kernel+0xdc/0x430
[<ffffffff81c1f33a>] ? x86_64_start_reservations+0x125/0x129
[<ffffffff81c1f438>] ? x86_64_start_kernel+0xfa/0x109
---[ end trace a7919e7f17c0a725 ]---
------------[ cut here ]------------
WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Tainted: G        W  ---------------- )
Hardware name: empty
Modules linked in: ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state
nf_conntrack ip6table_filter ip6_tables ipv6 raid1 e1000e serio_raw i2c_i801
i2c_core sg iTCO_wdt iTCO_vendor_support ioatdma dca i5000_edac edac_...
2011 Mar 20
2
task md1_resync:9770 blocked for more than 120 seconds and OOM errors
...hread+0x0/0xc4
kernel: [<ffffffff8021af66>] md_do_sync+0x1d8/0x833
kernel: [<ffffffff8008ca47>] enqueue_task+0x41/0x56
kernel: [<ffffffff8008cab2>] __activate_task+0x56/0x6d
kernel: [<ffffffff8008c897>] dequeue_task+0x18/0x37
kernel: [<ffffffff80062ff8>] thread_return+0x62/0xfe
kernel: [<ffffffff800a0b5f>] autoremove_wake_function+0x0/0x2e
kernel: [<ffffffff800a0947>] keventd_create_kthread+0x0/0xc4
kernel: [<ffffffff8021b93a>] md_thread+0xf8/0x10e
kernel: [<ffffffff800a0947>] keventd_create_kthread+0x0/0xc4
kernel: [<ff...
2014 Nov 03
1
dmesg error
...51>] file_read_actor+0x0/0x159
[<ffffffff8000c6eb>] __generic_file_aio_read+0x14c/0x198
[<ffffffff80016eb7>] generic_file_aio_read+0x36/0x3b
[<ffffffff8000cf39>] do_sync_read+0xc7/0x104
[<ffffffff800a34a7>] autoremove_wake_function+0x0/0x2e
[<ffffffff80063002>] thread_return+0x62/0xfe
[<ffffffff8000b721>] vfs_read+0xcb/0x171
[<ffffffff80011d15>] sys_read+0x45/0x6e
[<ffffffff8005d28d>] tracesys+0xd5/0xe0
What should be done?
jerry
2014 Apr 02
2
possible kernel bug?
...0000000000000402 00000000000001fc ffff88038d4ca000 00000000000001fc
<4><d> ffff880028216840 00000000000001fc 0000000100000000 000000000000002a
<4><d> 0000000000000002 ffff880028316840 0000000000000000 ffff88002820fbe0
<4>Call Trace:
<4> [<ffffffff81528350>] thread_return+0x46e/0x76e
<4> [<ffffffffa0292db5>] kvm_vcpu_block+0x75/0xc0 [kvm]
<4> [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
<4> [<ffffffffa02a73d7>] kvm_arch_vcpu_ioctl_run+0x627/0x10b0 [kvm]
<4> [<ffffffffa028eb04>] kvm_vcpu_ioctl+0x434/0x580 [k...
2010 Apr 05
2
Kernel BUG
2008 Aug 09
4
Upgrade 3.0.3 to 3.2.1
Hi,
I'm preparing to upgrade my servers from Xen 3.0.3 32-bit to 3.2.1 64-bit.
The old system:
Debian 4.0 i386 with included hypervisor 3.0.3 (pae) and dom0 kernel.
The new system:
Debian lenny amd64 with the included hypervisor 3.2.1 and dom0 kernel from
Debian 4.0 amd64.
My domUs have a self-compiled kernel built from the dom0 kernel of the old system
(mainly the dom0 kernel but
2011 Sep 01
3
DOM0 Hang on a large box....
...percall_page+3aa pop %r11
@ ffffffff802405eb: 0:xen_spin_wait+19b test %eax, %eax
@ ffffffff8035969b: 0:_spin_lock+10b test %al, %al
@ ffffffff800342f5: 0:double_lock_balance+65 mov %rbx, %rdi
@ ffffffff80356fc0: 0:thread_return+37e mov 0x880(%r12), %edi
static int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
__releases(this_rq->lock)
__acquires(busiest->lock)
__acquires(this_rq->lock)
{
int ret = 0;
if (unlikely(!spin_trylock(&busiest->...
2014 Jan 30
2
CentOS 6.5: NFS server crashes with list_add corruption errors
...an 30 09:46:13 qb-storage kernel: Call Trace:
Jan 30 09:46:13 qb-storage kernel: [<ffffffff81071e27>] ?
warn_slowpath_common+0x87/0xc0
Jan 30 09:46:13 qb-storage kernel: [<ffffffff81071f16>] ?
warn_slowpath_fmt+0x46/0x50
Jan 30 09:46:13 qb-storage kernel: [<ffffffff81527920>] ?
thread_return+0x4e/0x76e
Jan 30 09:46:13 qb-storage kernel: [<ffffffff812944ed>] ?
__list_add+0x6d/0xa0
Jan 30 09:46:13 qb-storage kernel: [<ffffffffa05bd60a>] ?
laundromat_main+0x23a/0x3f0 [nfsd]
Jan 30 09:46:13 qb-storage kernel: [<ffffffffa05bd3d0>] ?
laundromat_main+0x0/0x3f0 [nfsd]
Jan...
2010 Apr 05
0
Kernel BUG
...09:03:07 zebra kernel: [<ffffffff802883c0>] dequeue_task+0x18/0x37
Apr 5 09:03:07 zebra kernel: [<ffffffff80288407>] deactivate_task+0x28/0x5f
Apr 5 09:03:07 zebra kernel: [<ffffffff8026ef47>] monotonic_clock+0x35/0x7b
Apr 5 09:03:07 zebra kernel: [<ffffffff80262dd3>] thread_return+0x6c/0x113
Apr 5 09:03:07 zebra kernel: [<ffffffff80340af5>] kobject_cleanup+0x39/0x7e
Apr 5 09:03:07 zebra kernel: [<ffffffff88719d2c>]
:blkbk:blkif_schedule+0x36e/0x456
Apr 5 09:03:07 zebra kernel: [<ffffffff887199be>]
:blkbk:blkif_schedule+0x0/0x456
Apr 5 09:03:07 zebra...
2008 Jan 28
2
dovecot servers hanging with fuse/glusterfs errors
...f8020dd40 ffff88001f04ddb8
ffffffff80225ca3 ffff88001de23500 ffffffff803ef023 ffff88001f04de98
Call Trace:
[<ffffffff88021056>] :fuse:fuse_dev_readv+0x385/0x435
[<ffffffff8020dd40>] monotonic_clock+0x35/0x7d
[<ffffffff80225ca3>] deactivate_task+0x1d/0x28
[<ffffffff803ef023>] thread_return+0x0/0x120
[<ffffffff802801d3>] do_readv_writev+0x271/0x294
[<ffffffff802274c7>] default_wake_function+0x0/0xe
[<ffffffff803f0976>] __down_read+0x12/0xec
[<ffffffff88021120>] :fuse:fuse_dev_read+0x1a/0x1f
[<ffffffff802804bc>] vfs_read+0xcb/0x171
[<ffffffff802274c7>...
2010 Oct 14
1
KVM instance keep crashing
...9 localhost kernel: [<ffffffff8005dde9>] error_exit+0x0/0x84
Oct 14 16:25:09 localhost kernel: [<ffffffff802264ac>]
sys_sendto+0x11c/0x14f
Oct 14 16:25:10 localhost kernel: [<ffffffff8006b011>]
__switch_to+0xfe/0x22f
Oct 14 16:25:10 localhost kernel: [<ffffffff80062ff8>]
thread_return+0x62/0xfe
Oct 14 16:25:10 localhost kernel: [<ffffffff80043b84>]
sys_rt_sigreturn+0x323/0x356
Oct 14 16:25:10 localhost kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
Oct 14 16:25:10 localhost kernel:
After which the instance becomes very sluggish and unresponsive. Please advise
what...
2012 May 03
0
Strange situation with openssl and kernel
...ff8100a9323040
May 2 22:48:20 vmail kernel: 00001b85cd33ccf5 00000000000354bc
ffff8100a6583988 000000008006e665
May 2 22:48:20 vmail kernel: Call Trace:
May 2 22:48:20 vmail kernel: [<ffffffff8006ecd9>]
do_gettimeofday+0x40/0x90
May 2 22:48:20 vmail kernel: [<ffffffff80062ffd>] thread_return+0x5d/0xfe
May 2 22:48:20 vmail kernel: [<ffffffff8005a412>] getnstimeofday+0x10/0x29
May 2 22:48:20 vmail kernel: [<ffffffff8001558e>] sync_buffer+0x0/0x3f
May 2 22:48:20 vmail kernel: [<ffffffff800637de>] io_schedule+0x3f/0x67
May 2 22:48:20 vmail kernel: [<ffffffff800...
2012 Mar 16
1
NFS Hanging Under Heavy Load
...ffff814f4c85>] ? do_IRQ+0x75/0xf0
Mar 16 07:01:21 *****store01 kernel: [<ffffffff8100ba53>] ?
ret_from_intr+0x0/0x11
Mar 16 07:01:21 *****store01 kernel: <EOI> [<ffffffff8105673f>] ?
finish_task_switch+0x4f/0xe0
Mar 16 07:01:21 *****store01 kernel: [<ffffffff814ec9ce>] ?
thread_return+0x4e/0x760
Mar 16 07:01:21 *****store01 kernel: [<ffffffff81123741>] ?
__alloc_pages_nodemask+0x111/0x940
Mar 16 07:01:21 *****store01 kernel: [<ffffffff814ed7b2>] ?
schedule_timeout+0x192/0x2e0
Mar 16 07:01:21 *****store01 kernel: [<ffffffff8107c0a0>] ?
process_timeout+0x0/0x10
M...
2012 Jun 16
5
Not real confident in 3.3
I do not mean to be argumentative, but I have to admit a little
frustration with Gluster. I know an enormous amount of effort has gone
into this product, and I just can't believe that, with all the effort
behind it and so many people using it, it could be so fragile.
So here goes. Perhaps someone here can point to the error of my ways. I
really want this to work because it would be ideal
2015 Feb 16
2
Intermittent problem, likely disk IO related - mptscsih: ioc0: attempting task abort!
...Feb 16 06:07:01 192.168.13.230
Feb 16 06:07:01 Pid: 1950, comm: qemu-kvm Not tainted 2.6.32-504.8.1.el6.centos.plus.x86_64 #1
Feb 16 06:07:01 Call Trace:
Feb 16 06:07:01 <NMI>
Feb 16 06:07:01 [<ffffffff81060bb6>] ? __schedule_bug+0x66/0x70
Feb 16 06:07:01 [<ffffffff8153193c>] ? thread_return+0x6ac/0x7d0
Feb 16 06:07:01 [<ffffffffa002e35d>] ? write_msg+0xfd/0x110 [netconsole]
Feb 16 06:07:01 [<ffffffffa00b2d0e>] ? drm_crtc_helper_set_config+0x1be/0xa60 [drm_kms_helper]
Feb 16 06:07:01 [<ffffffff8106c85a>] ? __cond_resched+0x2a/0x40
Feb 16 06:07:01 [<ffffffff8153...
2013 Mar 25
1
A problem when mount glusterfs via NFS
Hi,
I run glusterfs with four nodes, 2x2 Distributed-Replicate.
I mounted it via fuse and ran some tests; it was OK.
However, when I mounted it via NFS, a problem appeared:
when I copied 200G of files to the glusterfs, the glusterfs process on the server node (mounted by the client) was killed because of OOM,
and all terminals on the client hung. After trying the test many times, I got the
2015 Feb 08
2
Intermittent problem, likely disk IO related - mptscsih: ioc0: attempting task abort!
NOTE: this is happening on Centos 6 x86_64, 2.6.32-504.3.3.el6.x86_64 not Centos 5
Dell PowerEdge 2970, Seagate SATA drive, non-raid.
I have this server which has been dying randomly, with no logs.
I had a tail -f over ssh for a week, when this just happened.
Feb 8 00:10:21 thirteen-230 kernel: mptscsih: ioc0: attempting task abort! (sc=ffff880057a0a080)
Feb 8 00:10:21 thirteen-230 kernel:
2009 Jan 10
51
Xen with dom0 pvops on ultra-recent "git tip" kernel on x86_64
Hi everyone,
I am very excited to see that dom0 pvops is finally coming close to
working, so I wanted to give it a try.
From the description it was not clear to me which kernel to choose as
base for the patches.hg, so I took the latest (that was ~ 2 weeks ago)
kernel on git.kernel.org I could find (post-2.6.28 git tip at that
point).
I managed to more or less apply all of the patches in the