Displaying 20 results from an estimated 86 matches for "_raw_spin_lock".
2012 Oct 22
4
xen_evtchn_do_upcall
...us | rcu_exit_nohz();
1) 0.431 us | }
1) 0.064 us | idle_cpu();
1) 1.152 us | }
1) | __xen_evtchn_do_upcall() {
1) 0.119 us | irq_to_desc();
1) | handle_edge_irq() {
1) 0.107 us | _raw_spin_lock();
1) | ack_dynirq() {
1) | evtchn_from_irq() {
1) | info_for_irq() {
1) | irq_get_irq_data() {
1) 0.052 us | irq_to_desc();
1) 0.418 us | }
1)...
2013 May 07
2
[PATCH] KVM: Fix kvm_irqfd_init initialization
...el.ko),
kvm_arch_init() will fail with -EEXIST, then kvm_irqfd_exit() will be
called on the error handling path. As a result, the kvm_irqfd system will
not be ready.
This patch fixes the following:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
PGD 0
Oops: 0002 [#1] SMP
Modules linked in: vhost_net
CPU 6
Pid: 4257, comm: qemu-system-x86 Not tainted 3.9.0-rc3+ #757 Dell Inc. OptiPlex 790/0V5HMK
RIP: 0010:[<ffffffff81c0721e>] [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
RSP: 0018:ffff880221721cc8 EFLAGS: 00010046
RAX:...
2016 Aug 22
2
Nested KVM issue
No luck with qemu-kvm-ev, the behavior is the same. Running perf record -a
-g on the baremetal shows that most of the CPU time is in _raw_spin_lock
Children   Self    Command    Shared Object       Symbol
- 93.62%   93.62%  qemu-kvm   [kernel.kallsyms]   [k] _raw_spin_lock
   - _raw_spin_lock
      + 45.30% kvm_mmu_sync_roots
      + 28.49% kvm_mmu_load
      + 25.00...
2015 Mar 30
2
[PATCH 0/9] qspinlock stuff -v15
...lt ticket spinlock implementation.
The %CPU times spent on spinlock contention (from perf) with the
performance governor and the intel_pstate driver were:
Kernel Function            3.19 kernel   3.19-qspinlock kernel
---------------            -----------   ---------------------
At 500 users:
_raw_spin_lock*               28.23%            2.25%
queue_spin_lock_slowpath        N/A             4.05%
At 1000 users:
_raw_spin_lock*               23.21%            2.25%
queue_spin_lock_slowpath        N/A             4.42%
At 1500 users:
_raw_spin_lock*               29.07%            2.24%
queue_spin_lock_slowpath...
2010 Jul 10
1
deadlock possibility introduced by "drm/nouveau: use drm_mm in preference to custom code doing the same thing"
...&(&dev_priv->context_switch_lock)->rlock){-.....}
[ 2417.746680] ... which became HARDIRQ-irq-safe at:
[ 2417.746682] [<ffffffff8109739e>] __lock_acquire+0x671/0x8f4
[ 2417.746685] [<ffffffff81097769>] lock_acquire+0x148/0x18d
[ 2417.746688] [<ffffffff8143b2cd>] _raw_spin_lock_irqsave+0x41/0x53
[ 2417.746692] [<ffffffffa00b3072>] nouveau_irq_handler+0x56/0xa48 [nouveau]
[ 2417.746698] [<ffffffff810a7b3b>] handle_IRQ_event+0xec/0x25d
[ 2417.746702] [<ffffffff810a98e1>] handle_fasteoi_irq+0x92/0xd2
[ 2417.746705] [<ffffffff81032953>] handle_...
2007 Jun 13
2
HTB deadlock
...soft lockup detected on CPU#1!
[<c013c890>] softlockup_tick+0x93/0xc2
[<c0127585>] update_process_times+0x26/0x5c
[<c0111cd5>] smp_apic_timer_interrupt+0x97/0xb2
[<c0104373>] apic_timer_interrupt+0x1f/0x24
[<c01c007b>] blk_do_ordered+0x70/0x27e
[<c01ce788>] _raw_spin_lock+0xaa/0x13e
[<f8b8b422>] htb_rate_timer+0x18/0xc4 [sch_htb]
[<c0127539>] run_timer_softirq+0x163/0x189
[<f8b8b40a>] htb_rate_timer+0x0/0xc4 [sch_htb]
[<c0123315>] __do_softirq+0x70/0xdb
[<c01233bb>] do_softirq+0x3b/0x42
[<c0111cda>] smp_apic_timer_interrupt+...
2016 Aug 22
0
Nested KVM issue
...and http://elrepo.org/tiki/kernel-lt
and use them for some comparison testing.
Cheers,
---
Adi Pircalabu
On 22-08-2016 18:31, Laurentiu Soica wrote:
> No luck with qemu-kvm-ev, the behavior is the same. Running perf
> record -a -g on the baremetal shows that most of the CPU time is in
> _raw_spin_lock
>
> Children   Self    Command    Shared Object       Symbol
> - 93.62%   93.62%  qemu-kvm   [kernel.kallsyms]   [k] _raw_spin_lock
>    - _raw_spin_lock
>       + 45.30% kvm_...
2011 Sep 27
2
high CPU usage and low perf
...e9bf3>] ? run_clustered_refs+0x370/0x682 [btrfs]
[<ffffffffa032d201>] ? btrfs_find_ref_cluster+0xd/0x13c [btrfs]
[<ffffffffa02e9fd6>] ? btrfs_run_delayed_refs+0xd1/0x17c [btrfs]
[<ffffffffa02f8467>] ? btrfs_commit_transaction+0x38f/0x709 [btrfs]
[<ffffffff8136f6e6>] ? _raw_spin_lock+0xe/0x10
[<ffffffffa02f79fe>] ? join_transaction.clone.23+0xc1/0x200 [btrfs]
[<ffffffff81068ffb>] ? wake_up_bit+0x2a/0x2a
[<ffffffffa02f28fd>] ? transaction_kthread+0x175/0x22a [btrfs]
[<ffffffffa02f2788>] ? btrfs_congested_fn+0x86/0x86 [btrfs]
[<ffffffff81068b2c>...
2011 Sep 09
1
[PATCH] drm/nouveau: initialize chan->fence.lock before use
...because it calls nouveau_channel_idle->nouveau_fence_update, which uses the
fence lock.
BUG: spinlock bad magic on CPU#0, test/24134
lock: ffff88019f90dba8, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
Pid: 24134, comm: test Not tainted 3.0.0-nv+ #800
Call Trace:
spin_bug+0x9c/0xa3
do_raw_spin_lock+0x29/0x13c
_raw_spin_lock+0x1e/0x22
nouveau_fence_update+0x2d/0xf1
nouveau_channel_idle+0x22/0xa0
nouveau_channel_put_unlocked+0x84/0x1bd
nouveau_channel_put+0x20/0x24
nouveau_channel_alloc+0x4ec/0x585
nouveau_ioctl_fifo_alloc+0x50/0x130
drm_ioctl+0x289/0x361
do_vfs_ioctl+0x4dd/0x52c
sys_...
2016 May 25
3
[PATCH] x86/paravirt: Do not trace _paravirt_ident_*() functions
...00000000d3e71000 CR4: 00000000001406e0
Stack:
ffff8800d4aefba0 ffffffff81cc5f47 ffff8800d4aefc60 ffffffff8122c15b
ffff8800d4aefcb0 ffff8800d4aefbd0 ffffffff811bf4cb 0000000000000002
0000000000000015 ffff8800d2276050 80000000c0fd8867 ffffea0000008030
Call Trace:
[<ffffffff81cc5f47>] _raw_spin_lock+0x27/0x30
[<ffffffff8122c15b>] handle_pte_fault+0x13db/0x16b0
[<ffffffff811bf4cb>] ? function_trace_call+0x15b/0x180
[<ffffffff8122ad85>] ? handle_pte_fault+0x5/0x16b0
[<ffffffff8122e322>] handle_mm_fault+0x312/0x670
[<ffffffff81231068>] ? find_vma+0x68/0x70...
2013 Feb 21
1
[PATCH] the ac->ac_allow_chain_relink=0 won't disable group relink
From: "Xiaowei.Hu" <xiaowei.hu at oracle.com>
ocfs2_block_group_alloc_discontig() disables chain relink by setting
ac->ac_allow_chain_relink = 0 because it grabs clusters from multiple
cluster groups. It doesn't keep the credits for all chain relinks, but
ocfs2_claim_suballoc_bits overrides this in the following call trace:
2018 Jun 16
2
DM 3.6.25 -> 4.x
...xf3/0x210
[ 9369.174860] [<ffffffff81038e07>] local_apic_timer_interrupt+0x37/0x60
[ 9369.174861] [<ffffffff8103963c>] smp_apic_timer_interrupt+0x3c/0x50
[ 9369.174863] [<ffffffff8196353b>] apic_timer_interrupt+0x6b/0x70
[ 9369.174864] <EOI> [<ffffffff81962587>] ? _raw_spin_lock+0x37/0x40
[ 9369.174868] [<ffffffff81845880>] unix_state_double_lock+0x60/0x70
[ 9369.174871] [<ffffffff8184924b>] unix_dgram_connect+0x8b/0x2e0
[ 9369.174873] [<ffffffff8176fb77>] SYSC_connect+0xc7/0x100
[ 9369.174875] [<ffffffff81770789>] SyS_connect+0x9/0x10
[ 9369.17...
2018 Jun 15
3
DM 3.6.25 -> 4.x
On 2018-06-15 at 15:16, Stefan G. Weichinger via samba wrote:
> On 2018-06-15 at 14:44, Stefan G. Weichinger via samba wrote:
>
>> on my way now ... glibc new, samba-4.5.16 for a start
>>
>> we now get:
>>
>> [2018/06/15 14:43:09.481113, 0]
>> ../source3/winbindd/winbindd_group.c:45(fill_grent)
>> Failed to find domain 'Unix Group'.
2013 Oct 21
1
Kernel BUG in ocfs2_get_clusters_nocache
...ffffffffa026eae0>] ? ocfs2_dio_end_io+0x110/0x110
[ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffffa026e9d0>] ? ocfs2_direct_IO+0x80/0x80
[ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffff81146e2b>] generic_file_aio_read+0x6bb/0x720
[Fri Oct 18 10:52:28 2013] [<ffffffff8172168e>] ? _raw_spin_lock+0xe/0x20
[Fri Oct 18 10:52:28 2013] [<ffffffffa02843db>] ?
__ocfs2_cluster_unlock.isra.32+0x9b/0xe0 [ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffffa02847a9>] ? ocfs2_inode_unlock+0xb9/0x130
[ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffffa028dcf9>] ocfs2_file_aio_read+0xd9/0x3c0...
2023 Mar 05
1
ocfs2 xattr
...387091] CR2: 0000000000000000 CR3: 000000003cfe2003 CR4: 0000000000370ef0
[ 27.387111] Call Trace:
[ 27.387130] <TASK>
[ 27.387141] ocfs2_calc_xattr_init+0x7d/0x330 [ocfs2]
[ 27.387382] ocfs2_mknod+0x471/0x1020 [ocfs2]
[ 27.387471] ? preempt_count_add+0x6a/0xa0
[ 27.387487] ? _raw_spin_lock+0x13/0x40
[ 27.387506] ocfs2_mkdir+0x44/0x130 [ocfs2]
[ 27.387583] ? security_inode_mkdir+0x3e/0x70
[ 27.387598] vfs_mkdir+0x9c/0x140
[ 27.387617] do_mkdirat+0x142/0x170
[ 27.387631] __x64_sys_mkdirat+0x47/0x80
[ 27.387643] do_syscall_64+0x58/0xc0
[ 27.387659] ? vfs_fstatat+0x5...
2013 Mar 27
0
OCFS2 issue reports, any ideas or patches, Thanks
...do_filp_open+0x42/0xa0
Mar 27 10:54:08 cvk-7 kernel: [ 361.374751] [<ffffffff81318ce1>] ? strncpy_from_user+0x31/0x40
Mar 27 10:54:08 cvk-7 kernel: [ 361.374755] [<ffffffff81182c0a>] ? do_getname+0x10a/0x180
Mar 27 10:54:08 cvk-7 kernel: [ 361.374759] [<ffffffff8165c46e>] ? _raw_spin_lock+0xe/0x20
Mar 27 10:54:08 cvk-7 kernel: [ 361.374764] [<ffffffff81194b67>] ? alloc_fd+0xf7/0x150
Mar 27 10:54:08 cvk-7 kernel: [ 361.374769] [<ffffffff81176f6d>] do_sys_open+0xed/0x220
Mar 27 10:54:08 cvk-7 kernel: [ 361.374773] [<ffffffff81179175>] ? fput+0x25/0x30
Mar 27 10...
2013 Feb 25
4
WARNING: at fs/btrfs/inode.c:2165 btrfs_orphan_commit_root+0xcb/0xdf()
...n_slowpath_common+0x7e/0x96
[<ffffffff811f75c5>] ? alloc_extent_state+0x59/0xa4
[<ffffffff81040f01>] warn_slowpath_null+0x15/0x17
[<ffffffff811e972f>] btrfs_orphan_commit_root+0xcb/0xdf
[<ffffffff811e3954>] commit_fs_roots.isra.24+0x99/0x153
[<ffffffff814c1ed6>] ? _raw_spin_lock+0x1b/0x1f
[<ffffffff814c2059>] ? _raw_spin_unlock+0x27/0x32
[<ffffffff811e47e8>] btrfs_commit_transaction+0x45a/0x954
[<ffffffff8105d4be>] ? add_wait_queue+0x44/0x44
[<ffffffff811de8a6>] transaction_kthread+0xe7/0x18a
[<ffffffff811de7bf>] ? try_to_freeze+0x33/0x33...