Displaying 13 results from an estimated 13 matches for "_spin_lock_irq".
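Every thread below references the out-of-line spinlock helpers
(_spin_lock_irq and friends) that 2.6-era SMP kernels export; the public
API name is spin_lock_irq(). For context, a minimal sketch of that API,
assuming a 2.6-style kernel module (demo_lock, shared_counter and
demo_update are illustrative names, not taken from any thread below):

    #include <linux/spinlock.h>

    /* spin_lock_irq() disables local interrupts on this CPU and takes
     * the lock; on CONFIG_SMP builds of this era the call compiles down
     * to the exported helper _spin_lock_irq(). */
    static DEFINE_SPINLOCK(demo_lock);
    static unsigned long shared_counter;

    static void demo_update(void)
    {
            spin_lock_irq(&demo_lock);      /* IRQs off, lock held */
            shared_counter++;               /* critical section */
            spin_unlock_irq(&demo_lock);    /* lock dropped, IRQs back on */
    }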
2011 Mar 10
8
Kernel panic with 2.6.32-30 under network activity
...<ffffffff81241140>] ? sock_close+0x22/0x26
[469390.127097] [<ffffffff810ef879>] ? __fput+0x100/0x1af
[469390.127106] [<ffffffff810eccb6>] ? filp_close+0x5b/0x62
[469390.127116] [<ffffffff8104f878>] ? put_files_struct+0x64/0xc1
[469390.127127] [<ffffffff812fbb02>] ? _spin_lock_irq+0x7/0x22
[469390.127135] [<ffffffff81051141>] ? do_exit+0x236/0x6c6
[469390.127144] [<ffffffff8100c241>] ? __raw_callee_save_xen_pud_val+0x11/0x1e
[469390.127154] [<ffffffff8100e22f>] ? xen_restore_fl_direct_end+0x0/0x1
[469390.127163] [<ffffffff8100c205>] ? __raw_callee...
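The "? _spin_lock_irq+0x7/0x22" frame on this exit path is most likely
the exiting task's siglock being taken with interrupts disabled; a
hedged sketch of that pattern (mainline 2.6.32 names, not verified
against this exact Debian/Xen kernel):

    #include <linux/sched.h>
    #include <linux/spinlock.h>

    /* Sketch: do_exit() and its helpers serialize signal teardown under
     * the task's siglock, which is what puts a _spin_lock_irq frame
     * under do_exit() in traces like the one above. */
    static void exit_path_locking(struct task_struct *tsk)
    {
            spin_lock_irq(&tsk->sighand->siglock);
            /* ... signal/timer state is torn down here ... */
            spin_unlock_irq(&tsk->sighand->siglock);
    }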
2013 Apr 07
2
qemu-kvm high cpu usage with idle windows guest
A Windows 2k8 R2 guest with updated virtio drivers is idle inside, but on
the host the qemu-kvm process uses 7-15% CPU.
Things that have been tried without any significant success:
- removed the tablet device
- manually set the CPU topology (cores per socket)
Is it just me, or does qemu-kvm have little tolerance for Windows?
2007 Jul 30
3
kmod-drbd-smp (2.6.9-55.0.2.EL) has unknown symbols (kmod-drbd not).
...bd.ko needs unknown symbol _spin_unlock_irq
WARNING: /lib/modules/2.6.9-55.0.2.EL/extra/drbd.ko needs unknown symbol _spin_unlock
WARNING: /lib/modules/2.6.9-55.0.2.EL/extra/drbd.ko needs unknown symbol _spin_unlock_irqrestore
WARNING: /lib/modules/2.6.9-55.0.2.EL/extra/drbd.ko needs unknown symbol _spin_lock_irq
WARNING: /lib/modules/2.6.9-55.0.2.EL/extra/drbd.ko needs unknown symbol del_timer_sync
WARNING: /lib/modules/2.6.9-55.0.2.EL/extra/drbd.ko needs unknown symbol _spin_lock_irqsave
WARNING: /lib/modules/2.6.9-55.0.2.EL/extra/drbd.ko needs unknown symbol _spin_lock
Installed: kmod-drbd.i686 0:0.7.24...
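Those warnings are the classic sign of an SMP-built module checked
against a kernel that does not export the out-of-line spinlock helpers.
A rough sketch of the 2.6.9-era arrangement (heavily simplified; the
real headers are more involved):

    /* On CONFIG_SMP kernels spin_lock_irq() calls an exported helper,
     * so the module carries an undefined reference to _spin_lock_irq.
     * On UP kernels the macro inlines to local_irq_disable() and the
     * symbol never exists, hence "needs unknown symbol". */
    #ifdef CONFIG_SMP
    void _spin_lock_irq(spinlock_t *lock);  /* exported from kernel/spinlock.c */
    #define spin_lock_irq(lock)     _spin_lock_irq(lock)
    #else
    #define spin_lock_irq(lock)     local_irq_disable()  /* no symbol emitted */
    #endif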
2012 Mar 07
1
[HELP!]GFS2 in the xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2013 Oct 08
1
OT: errors compiling kernel module as a rpm package
...(__secpath_destroy) = 0x430555cc
kernel(__skb_checksum_complete) = 0xcf0b750c
kernel(__skb_warn_lro_forwarding) = 0x4d288688
kernel(__stack_chk_fail) = 0xf0fdf6cb
kernel(__wake_up) = 0x642e54ac
kernel(_read_lock) = 0x1a75caa3
kernel(_spin_lock) = 0x973873ab
kernel(_spin_lock_bh) = 0x93cbd1ec
kernel(_spin_lock_irq) = 0xecde1418
kernel(_spin_lock_irqsave) = 0x712aa29b
kernel(_spin_unlock_bh) = 0x3aa1dbcf
kernel(_spin_unlock_irqrestore) = 0x4b07e779
kernel(add_timer) = 0x46085e4f
kernel(alloc_netdev_mq) = 0xafbc0d15
kernel(autoremove_wake_function) = 0xc8b57c27
kernel(boot_tvec_bases) = 0xfc6256b9
kernel(call_...
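Those kernel(symbol) = 0x... pairs are modversion CRCs: with
CONFIG_MODVERSIONS each exported symbol's type signature is hashed, and
rpmbuild's find-requires turns the module's import table into
kernel(name) = crc dependencies. A sketch of what that table looks like
in a generated *.mod.c (the two CRCs are copied from the excerpt above;
the struct layout is the 2.6-era linux/module.h one):

    struct modversion_info {
            unsigned long crc;
            char name[64 - sizeof(unsigned long)];
    };

    static const struct modversion_info ____versions[]
    __attribute__((section("__versions"))) = {
            { 0xecde1418, "_spin_lock_irq" },     /* must match the running kernel */
            { 0x712aa29b, "_spin_lock_irqsave" },
            /* ... one entry per symbol the module imports ... */
    };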
2012 Feb 27
0
segfaulting tapdisk2 process leads to kernel oops
...8] [<ffffffff810d1394>] ? remove_vma+0x2c/0x72
[1527071.224008] [<ffffffff810d1503>] ? exit_mmap+0x129/0x148
[1527071.224008] [<ffffffff8104cc75>] ? mmput+0x3c/0xdf
[1527071.224008] [<ffffffff8105087a>] ? exit_mm+0x102/0x10d
[1527071.224008] [<ffffffff8132448a>] ? _spin_lock_irq+0x7/0x22
[1527071.224008] [<ffffffff810522a3>] ? do_exit+0x1f8/0x6c6
[1527071.224008] [<ffffffff8105d5bb>] ? __dequeue_signal+0xfb/0x124
[1527071.224008] [<ffffffff8100eccf>] ? xen_restore_fl_direct_end+0x0/0x1
[1527071.224008] [<ffffffff810e7ebd>] ? kmem_cache_free+0x72...
2010 Sep 17
1
General protection fault
...en+0x1e1/0x220
Sep 17 15:26:18 box6 kernel: [ 1948.826138] [<ffffffffa0196446>] fbcon_blank+0x156/0x250 [fbcon]
Sep 17 15:26:18 box6 kernel: [ 1948.826145] [<ffffffff810397a9>] ? default_spin_lock_flags+0x9/0x10
Sep 17 15:26:18 box6 kernel: [ 1948.826148] [<ffffffff8155aa2f>] ? _spin_lock_irqsave+0x2f/0x40
Sep 17 15:26:18 box6 kernel: [ 1948.826154] [<ffffffff81076c7c>] ? lock_timer_base+0x3c/0x70
Sep 17 15:26:18 box6 kernel: [ 1948.826157] [<ffffffff81077d17>] ? mod_timer+0x147/0x230
Sep 17 15:26:18 box6 kernel: [ 1948.826161] [<ffffffff8134350e>] do_unblank_screen...
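The _spin_lock_irqsave frame here is the flags-saving variant: unlike
spin_lock_irq(), it records the current IRQ state so it is safe from
contexts where interrupts may already be disabled, which is how
lock_timer_base()/mod_timer() in this trace use it. A minimal sketch
(demo names are illustrative):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);

    static void demo_touch(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&demo_lock, flags);      /* save IRQ state, disable, lock */
            /* critical section */
            spin_unlock_irqrestore(&demo_lock, flags); /* unlock, restore saved state */
    }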
2010 Mar 11
17
Panic on boot on Sun Blade 6270
Hi All,
I'm getting the attached panic in the hypervisor on a Sun 6270 running
xen-testing.hg changeset 19913:6063c16aeeaa. I can run xen-3.3.1 from
the standard SLES 11 distribution (which says that it's changeset
18546, but it has many patches). Any ideas?
thanks,
dan
2009 Apr 03
1
Memory Leak with stock Squirrelmail, PHP, mysql, apache since 5.3
..._mod:dm_any_congested+0x38/0x3f
Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff80213c47>] filemap_nopage+0x148/0x322
Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff80208db9>] __handle_mm_fault+0x440/0x11f6
Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff802639f9>] _spin_lock_irqsave+0x9/0x14
Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff802666ef>] do_page_fault+0xf7b/0x12e0
Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff8025f82b>] error_exit+0x0/0x6e
Apr 2 17:18:28 s_local at webmail kernel:
Apr 2 17:18:28 s_local at webmail kernel:Mem-info:
Apr...
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
...ck to normal with a run time at 150 secs.
I've created an infographic from our ganglia graphs for the above scenario.
https://dl.dropboxusercontent.com/u/23468442/misc/lustre_bc_overhead.png
Attached is an excerpt from perf top indicating that the kernel routine taking
the most time is _spin_lock_irqsave, if that means anything to anyone.
Things tested:
It does not seem to matter if we mount lustre over infiniband or ethernet.
Filling the buffer cache with files from an NFS filesystem does not degrade
performance.
Filling the buffer cache with one large file does not give degraded performa...
2010 Jan 21
47
What is the state of blktap2?
I'm currently working on moving storage services into their own domain
and I've been looking at blktap2. I've been trying to get an image
mounted with blktap2, and for some odd reason tapdisk2 keeps hanging
instead of quitting at the end. I haven't removed any of the storage
startup code at this point, so everything should be as it normally is in
xen-unstable.