Displaying 20 results from an estimated 26 matches for "wake_up_st".
2018 May 30 · 0 · [ovirt-users] Re: Gluster problems, cluster performance issues
...2
0x00000000
[ 8280.188843] Call Trace:
[ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189067] [<ffffffffc04abfec>] xfsaild+0x1...
2018 May 30 · 1 · [ovirt-users] Re: Gluster problems, cluster performance issues
...188843] Call Trace:
> [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
> [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70
> [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
> [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
> [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
> [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
> [ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
> [ 8280.189067] [<ffffffff...
2018 Jun 01 · 0 · [ovirt-users] Re: Gluster problems, cluster performance issues
...ce:
>> [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
>> [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70
>> [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
>> [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
>> [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
>> [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
>> [ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
>> [ 8280.18...
2018 May 30 · 2 · [ovirt-users] Re: Gluster problems, cluster performance issues
...33 1
>>> 0x00000080
>>> [10679.527150] Call Trace:
>>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
>>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>> [...
2018 May 30 · 1 · [ovirt-users] Re: Gluster problems, cluster performance issues
...terclogro D ffff97209832bf40 0 14933 1
> 0x00000080
> [10679.527150] Call Trace:
> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.527271] [<ffffffffb944f3d0>] SyS_...
2005 Jan 14 · 1 · xen-unstable dom0/1 smp schedule while atomic
...[vfs_read+210/304] vfs_read+0xd2/0x130
[fget_light+130/144] fget_light+0x82/0x90
[sys_read+126/128] sys_read+0x7e/0x80
[do_notify_resume+55/60] do_notify_resume+0x37/0x3c
[work_notifysig+19/24] work_notifysig+0x13/0x18
scheduling while atomic
[schedule+1682/1696] schedule+0x692/0x6a0
[wake_up_state+24/32] wake_up_state+0x18/0x20
[wait_for_completion+148/224] wait_for_completion+0x94/0xe0
[default_wake_function+0/32] default_wake_function+0x0/0x20
[force_sig_specific+99/144] force_sig_specific+0x63/0x90
[default_wake_function+0/32] default_wake_function+0x0/0x20
[zap_threads+92/16...
2018 May 30 · 0 · [ovirt-users] Re: Gluster problems, cluster performance issues
...> [10679.527150] Call Trace:
>>>>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 ...
2014 Oct 20 · 2 · INFO: task echo:622 blocked for more than 120 seconds. - 3.18.0-0.rc0.git
..._held_locks+0x7c/0xb0
[ 240.235645] [<ffffffff81861da0>] ? _raw_spin_unlock_irq+0x30/0x50
[ 240.236198] [<ffffffff81107a4d>] ? trace_hardirqs_on_caller+0x15d/0x200
[ 240.236729] [<ffffffff8185d52c>] wait_for_completion+0x10c/0x150
[ 240.237290] [<ffffffff810e51f0>] ? wake_up_state+0x20/0x20
[ 240.237842] [<ffffffff8112a559>] _rcu_barrier+0x159/0x200
[ 240.238375] [<ffffffff8112a655>] rcu_barrier+0x15/0x20
[ 240.238913] [<ffffffff8171813f>] netdev_run_todo+0x6f/0x310
[ 240.239449] [<ffffffff817251ae>] rtnl_unlock+0xe/0x10
[ 240.239999] [<...
2015 Oct 01 · 2 · req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...fffb43eb315>] blk_mq_run_hw_queue+0x95/0xb0
>>> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
>>> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
>>> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
>>> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
>>> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
>>> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
>>> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
>>> [<ffffffffb40fb6c0>] ? kthread_create_on_node+0x180/0x1...
2015 Oct 01 · 2 · req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...fffb43eb315>] blk_mq_run_hw_queue+0x95/0xb0
>>> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
>>> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
>>> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
>>> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
>>> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
>>> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
>>> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
>>> [<ffffffffb40fb6c0>] ? kthread_create_on_node+0x180/0x1...
2015 Oct 01 · 2 · req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...43eb315>] blk_mq_run_hw_queue+0x95/0xb0
>>> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
>>> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
>>> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
>>> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
>>> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
>>> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
>>> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
>>> [<ffffffffb40fb6c0>] ? kthread_create_on_node+0x180...
2015 Oct 01 · 2 · req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...43eb315>] blk_mq_run_hw_queue+0x95/0xb0
>>> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
>>> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
>>> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
>>> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
>>> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
>>> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
>>> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
>>> [<ffffffffb40fb6c0>] ? kthread_create_on_node+0x180...
2017 Oct 22 · 0 · Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
...fff88103afe4528 ffff88103eaa2000
Oct 19 23:06:57 radon kernel: Call Trace:
Oct 19 23:06:57 radon kernel: [<ffffffff816a94e9>] schedule+0x29/0x70
Oct 19 23:06:57 radon kernel: [<ffffffffc04d1d16>] _xfs_log_force+0x1c6/0x2c0 [xfs]
Oct 19 23:06:57 radon kernel: [<ffffffff810c4810>] ? wake_up_state+0x20/0x20
Oct 19 23:06:57 radon kernel: [<ffffffffc04ddb9c>] ? xfsaild+0x16c/0x6f0 [xfs]
Oct 19 23:06:57 radon kernel: [<ffffffffc04d1e3c>] xfs_log_force+0x2c/0x70 [xfs]
Oct 19 23:06:57 radon kernel: [<ffffffffc04dda30>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
Oct 19 23:06...
2015 Oct 01 · 0 · req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...blk_mq_run_hw_queue+0x95/0xb0
>>>> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
>>>> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
>>>> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
>>>> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
>>>> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
>>>> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
>>>> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
>>>> [<ffffffffb40fb6c0>] ? kthread_create_o...
2018 Oct 26 · 0 · systemd automount of cifs share hangs
...[<ffffffff85ab4e00>] ? autofs4_wait+0x420/0x910
Oct 26 09:11:45 saruman kernel: [<ffffffff859faf82>] ? kmem_cache_alloc+0x1c2/0x1f0
Oct 26 09:11:45 saruman kernel: [<ffffffff85f192ed>] wait_for_completion+0xfd/0x140
Oct 26 09:11:45 saruman kernel: [<ffffffff858d2010>] ? wake_up_state+0x20/0x20
Oct 26 09:11:45 saruman kernel: [<ffffffff85ab603b>] autofs4_expire_wait+0xab/0x160
Oct 26 09:11:45 saruman kernel: [<ffffffff85ab2fc0>] do_expire_wait+0x1e0/0x210
Oct 26 09:11:45 saruman kernel: [<ffffffff85ab31fe>] autofs4_d_manage+0x7e/0x1d0
Oct 26 09:11:45 saru...
2017 Sep 28 · 2 · mounting an nfs4 file system as v4.0 in CentOS 7.4?
CentOS 7.4 client mounting a CentOS 7.4 server filesystem over nfs4.
NFS seems to be much slower since the upgrade to 7.4, so I thought it
might be nice to mount the directory as v4.0 rather than the new default
of v4.1 to see if it makes a difference.
The release notes state, without an example:
"You can retain the original behavior by specifying 0 as the minor version"
nfs(5)
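For reference, a minimal sketch of how that minor-version selection is usually written, assuming a hypothetical server name and export path; nfs(5) documents both the vers=4.0 form and the older nfsvers=/minorversion= spelling:

  # mount the export as NFSv4.0 instead of the 7.4 default of v4.1
  mount -t nfs -o vers=4.0 server.example.com:/export /mnt/export

  # equivalent /etc/fstab entry, spelling out the minor version separately
  server.example.com:/export  /mnt/export  nfs  nfsvers=4,minorversion=0  0 0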
2014 Nov 10 · 0 · kernel BUG at drivers/block/virtio_blk.c:172
...lk_mq_flush_plug_list+0x13b/0x160
[ 3.673439] [<ffffffff812d2391>] blk_flush_plug_list+0xc1/0x220
[ 3.673439] [<ffffffff812d28a8>] blk_finish_plug+0x18/0x50
[ 3.673439] [<ffffffffa01ce487>] _xfs_buf_ioapply+0x327/0x430 [xfs]
[ 3.673439] [<ffffffff8109ae20>] ? wake_up_state+0x20/0x20
[ 3.673439] [<ffffffffa01d0424>] ? xfs_bwrite+0x24/0x60 [xfs]
[ 3.673439] [<ffffffffa01cffb1>] xfs_buf_submit_wait+0x61/0x1d0 [xfs]
[ 3.673439] [<ffffffffa01d0424>] xfs_bwrite+0x24/0x60 [xfs]
[ 3.673439] [<ffffffffa01f5dc7>] xlog_bwrite+0x87/0x11...
2018 Oct 19 · 2 · systemd automount of cifs share hangs
>
> But if I start the automount unit and ls the mount point, the shell
> hangs and, a long time later (I haven't timed it, maybe an hour), I
> eventually get a prompt again. Control-C won't interrupt it. I can
> still ssh in and get another session, so it's just the process that's
> accessing the mount point that hangs.
>
I don't have a
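A minimal sketch of the sequence being described, with hypothetical unit and mount-point names standing in for the real ones:

  systemctl start mnt-share.automount   # hypothetical automount unit for the cifs share
  ls /mnt/share                         # the ls that hangs instead of triggering a clean mount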
2015 Oct 01 · 4 · kernel BUG at drivers/block/virtio_blk.c:172!
...eue+0x1d0/0x370
> [<ffffffffb43eb315>] blk_mq_run_hw_queue+0x95/0xb0
> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
> [<ffffffffb40fb6c0>] ? kthread_create_on_node+0x180/0x180
> Code: 00 0000 41 c7 85 7...
2015 Oct 01 · 4 · kernel BUG at drivers/block/virtio_blk.c:172!
...eue+0x1d0/0x370
> [<ffffffffb43eb315>] blk_mq_run_hw_queue+0x95/0xb0
> [<ffffffffb43ec804>] blk_mq_flush_plug_list+0x129/0x140
> [<ffffffffb43e33d8>] blk_finish_plug+0x18/0x50
> [<ffffffffb45e3bea>] dmcrypt_write+0x1da/0x1f0
> [<ffffffffb4108c90>] ? wake_up_state+0x20/0x20
> [<ffffffffb45e3a10>] ? crypt_iv_lmk_dtr+0x60/0x60
> [<ffffffffb40fb789>] kthread_create_on_node+0x180/0x180
> [<ffffffffb4705e92>] ret_from_fork+0x42/0x70
> [<ffffffffb40fb6c0>] ? kthread_create_on_node+0x180/0x180
> Code: 00 0000 41 c7 85 7...