Displaying 20 results from an estimated 38 matches for "no_context".
2016 Nov 28
1
CentOS 6.4 tcp_fatretrans_alert causes panic
...ll of a sudden,
I'm not sure if an upgrade caused this problem. Here's what I got from
backtracing:
PID: 8136 TASK: ffff8803341aead0 CPU: 2 COMMAND: ""
#0 [ffff880028283610] panic at ffffffff815286b8
#1 [ffff880028283690] oops_end at ffffffff8152c8a2
#2 [ffff8800282836c0] no_context at ffffffff81046c1b
#3 [ffff880028283710] __bad_area_nosemaphore at ffffffff81046ea5
#4 [ffff880028283760] bad_area_nosemaphore at ffffffff81046f73
#5 [ffff880028283770] __do_page_fault at ffffffff810476d1
#6 [ffff880028283890] do_page_fault at ffffffff8152e7be
#7 [ffff8800282838c0] page_fault...
2012 Nov 26
1
kernel panic on Xen
...qd/1 Tainted: G D 3.2.23 #131
[ 100.973270] Call Trace:
[ 100.973273] [<ffffffff816674ae>] panic+0x91/0x1a2
[ 100.973278] [<ffffffff8100adb2>] ? check_events+0x12/0x20
[ 100.973282] [<ffffffff81673b0a>] oops_end+0xea/0xf0
[ 100.973286] [<ffffffff81666e6b>] no_context+0x214/0x223
[ 100.973291] [<ffffffff8113cf94>] ? kmem_cache_free+0x104/0x110
[ 100.973295] [<ffffffff8166704b>] __bad_area_nosemaphore+0x1d1/0x1f0
[ 100.973299] [<ffffffff8166707d>] bad_area_nosemaphore+0x13/0x15
[ 100.973304] [<ffffffff816763fb>] do_page_fault+0x35b...
2011 Feb 11
1
null pointer dereference in iov_iter_copy_from_user_atomic while updating rpm packages
...524.584847] [<c043da92>] ? do_exit+0x1d0/0x62c
Feb 10 10:59:45 testbox kernel: [ 524.584850] [<c043bf68>] ? kmsg_dump+0x3a/0xb6
Feb 10 10:59:45 testbox kernel: [ 524.584853] [<c07d555b>] ? oops_end+0xa2/0xa8
Feb 10 10:59:45 testbox kernel: [ 524.584858] [<c07cc31f>] ? no_context+0x128/0x130
Feb 10 10:59:45 testbox kernel: [ 524.584861] [<c07cc441>] ? __bad_area_nosemaphore+0x11a/0x122
Feb 10 10:59:45 testbox kernel: [ 524.584884] [<f81fdd20>] ? btrfs_block_rsv_release+0x51/0x57 [btrfs]
Feb 10 10:59:45 testbox kernel: [ 524.584888] [<c07cc460>] ? bad...
2017 Jul 11
2
[regression drm/noveau] suspend to ram -> BOOM: exception RIP: drm_calc_vbltimestamp_from_scanoutpos+335
...3c6068f80 CPU: 7 COMMAND: "kworker/u16:26"
#0 [ffffc900039f76a0] machine_kexec at ffffffff810481fc
#1 [ffffc900039f76f0] __crash_kexec at ffffffff81109e3a
#2 [ffffc900039f77b0] crash_kexec at ffffffff8110adc9
#3 [ffffc900039f77c8] oops_end at ffffffff8101d059
#4 [ffffc900039f77e8] no_context at ffffffff81055ce5
#5 [ffffc900039f7838] do_page_fault at ffffffff81056c5b
#6 [ffffc900039f7860] page_fault at ffffffff81690a88
[exception RIP: report_bug+93]
RIP: ffffffff8167227d RSP: ffffc900039f7918 RFLAGS: 00010002
RAX: ffffffffa0229905 RBX: ffffffffa020af0f RCX: 00000000000...
2010 Sep 21
4
PROBLEM: [BISECTED] 2.6.35.5 xen domU panics just after the boot
...<ffffffff8103c5c6>] do_exit+0x6d/0x77e
[ 0.000000] [<ffffffff8133d43f>] ? _raw_spin_unlock_irqrestore+0x11/0x13
[ 0.000000] [<ffffffff8103ac55>] ? kmsg_dump+0x11e/0x139
[ 0.000000] [<ffffffff8100b936>] oops_end+0x8f/0x94
[ 0.000000] [<ffffffff810239cc>] no_context+0x1f4/0x203
[ 0.000000] [<ffffffff81023b65>] __bad_area_nosemaphore+0x18a/0x1ad
[ 0.000000] [<ffffffff81023b96>] bad_area_nosemaphore+0xe/0x10
[ 0.000000] [<ffffffff81023f10>] do_page_fault+0x115/0x229
[ 0.000000] [<ffffffff8133daf5>] page_fault+0x25/0x30
[...
2014 Sep 09
2
Re: CoreOS support
...id: 0 Tainted: G W --------------- T 2.6.32-30-pve #1
>> [ 7.775735] Call Trace:
>> [ 7.775735] [<ffffffff81072bd9>] ? add_taint+0x69/0x70
>> [ 7.775735] [<ffffffff8155c443>] ? oops_end+0x53/0xf0
>> [ 7.775735] [<ffffffff8154f0c7>] ? no_context+0x1eb/0x216
>> [ 7.775735] [<ffffffff810cc22e>] ? is_module_text_address+0xe/0x20
>> [ 7.775735] [<ffffffff8154f260>] ? __bad_area_nosemaphore+0x16e/0x18d
>> [ 7.775735] [<ffffffff8154f292>] ? bad_area_nosemaphore+0x13/0x15
>> [ 7.775735] [&...
2007 Apr 18
0
[PATCH 2/5] Add subarch mmu queue flush hook
...==========================
--- linux-2.6.13.orig/arch/i386/mm/fault.c 2005-08-24 09:30:53.000000000 -0700
+++ linux-2.6.13/arch/i386/mm/fault.c 2005-08-24 09:43:27.000000000 -0700
@@ -562,6 +562,15 @@ vmalloc_fault:
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
goto no_context;
+
+ /*
+ * We have just updated this root with a copy of the kernel
+ * pmd. To return without flushing would introduce a fault
+ * loop if running on a hypervisor which uses queued page
+ * table updates.
+ */
+ update_mmu_cache(vma, address, pte_k);
+
return;
}
}
Index: linux-...
2009 Nov 08
9
2.6.31 xenified kernel - not ready for production
Hi,
I just want to know if anybody uses the 2.6.31.4 xenified kernel (aka
OpenSUSE) in production?
We have been testing it on a new Nehalem Xeon server for a few weeks
without any problem.
But as soon as we tried it on a production machine - after several
production domUs started - a hard OS failure.
We had to switch back to 2.6.18.8 - the Xen stock kernel.
Peter
2017 Jul 11
0
[regression drm/noveau] suspend to ram -> BOOM: exception RIP: drm_calc_vbltimestamp_from_scanoutpos+335
...ND: "kworker/u16:26"
> #0 [ffffc900039f76a0] machine_kexec at ffffffff810481fc
> #1 [ffffc900039f76f0] __crash_kexec at ffffffff81109e3a
> #2 [ffffc900039f77b0] crash_kexec at ffffffff8110adc9
> #3 [ffffc900039f77c8] oops_end at ffffffff8101d059
> #4 [ffffc900039f77e8] no_context at ffffffff81055ce5
> #5 [ffffc900039f7838] do_page_fault at ffffffff81056c5b
> #6 [ffffc900039f7860] page_fault at ffffffff81690a88
> [exception RIP: report_bug+93]
> RIP: ffffffff8167227d RSP: ffffc900039f7918 RFLAGS: 00010002
> RAX: ffffffffa0229905 RBX: ffffffffa...
2014 Mar 31
1
OOPS in hvc / virtconsole
....474098] [<ffffffff8106fca7>] do_exit+0x6a7/0xa20
[ 0.474098] [<ffffffff810be9f8>] ? console_unlock+0x1e8/0x3f0
[ 0.474098] [<ffffffff81d69c60>] ? vty_init+0x174/0x174
[ 0.474098] [<ffffffff8168f4ac>] oops_end+0x9c/0xe0
[ 0.474098] [<ffffffff81683092>] no_context+0x27e/0x28b
[ 0.474098] [<ffffffff81d69c60>] ? vty_init+0x174/0x174
[ 0.474098] [<ffffffff81683112>] __bad_area_nosemaphore+0x73/0x1ca
[ 0.474098] [<ffffffff813173af>] ? add_uevent_var+0x6f/0x110
[ 0.474098] [<ffffffff81d69c60>] ? vty_init+0x174/0x174
[ 0....
2017 Oct 18
2
Null deference panic in CentOS-6.5
...OS-6.5:
crash> bt
PID: 106074 TASK: ffff8839c1e32ae0 CPU: 4 COMMAND: "flushd4[cbd-sd-"
#0 [ffff8839c2a91900] machine_kexec at ffffffff81038fa9
#1 [ffff8839c2a91960] crash_kexec at ffffffff810c5992
#2 [ffff8839c2a91a30] oops_end at ffffffff81515c90
#3 [ffff8839c2a91a60] no_context at ffffffff81049f1b
#4 [ffff8839c2a91ab0] __bad_area_nosemaphore at ffffffff8104a1a5
#5 [ffff8839c2a91b00] bad_area_nosemaphore at ffffffff8104a273
#6 [ffff8839c2a91b10] __do_page_fault at ffffffff8104a9bf
#7 [ffff8839c2a91c30] do_page_fault at ffffffff81517bae
#8 [ffff8839c2a91c60]...
2006 Mar 14
12
[RFC] VMI for Xen?
I'm sure everyone has seen the drop of VMI patches for Linux at this
point, but just in case, the link is included below.
I've read this version of the VMI spec and have made my way through most
of the patches. While I wasn't really that impressed with the first
spec wrt Xen, the second version seems to be much more palatable.
Specifically, the code inlining and
2010 Dec 08
2
WG: Dom0 kernel crashes when dom0_mem= is used!
...5.020828] [<ffffffff8130a563>] ? printk+0x4e/0x5b
[ 5.020835] [<ffffffff81051f7b>] ? do_exit+0x72/0x6b5
[ 5.020839] [<ffffffff8100ec72>] ? check_events+0x12/0x20
[ 5.020845] [<ffffffff8130d1bd>] ? oops_end+0xaf/0xb4
[ 5.020851] [<ffffffff8103338c>] ? no_context+0x1e9/0x1f8
[ 5.020856] [<ffffffff81033541>] ? __bad_area_nosemaphore+0x1a6/0x1ca
[ 5.020860] [<ffffffff8130e541>] ? do_page_fault+0x2b/0x2fc
[ 5.020865] [<ffffffff8130c695>] ? page_fault+0x25/0x30
[ 5.020870] [<ffffffff810badce>] ? __alloc_pages_nodemask+0x8...
2014 Sep 09
2
Re: CoreOS support
...7.775735] Pid: 297, comm: mount veid: 0 Tainted: G W --------------- T 2.6.32-30-pve #1
[ 7.775735] Call Trace:
[ 7.775735] [<ffffffff81072bd9>] ? add_taint+0x69/0x70
[ 7.775735] [<ffffffff8155c443>] ? oops_end+0x53/0xf0
[ 7.775735] [<ffffffff8154f0c7>] ? no_context+0x1eb/0x216
[ 7.775735] [<ffffffff810cc22e>] ? is_module_text_address+0xe/0x20
[ 7.775735] [<ffffffff8154f260>] ? __bad_area_nosemaphore+0x16e/0x18d
[ 7.775735] [<ffffffff8154f292>] ? bad_area_nosemaphore+0x13/0x15
[ 7.775735] [<ffffffff81047bee>] ? __do_page...
2007 Apr 18
1
[RFC, PATCH 19/24] i386 Vmi mmu changes
...-rc5/arch/i386/mm/fault.c
===================================================================
--- linux-2.6.16-rc5.orig/arch/i386/mm/fault.c 2006-03-10 12:55:05.000000000 -0800
+++ linux-2.6.16-rc5/arch/i386/mm/fault.c 2006-03-10 15:57:08.000000000 -0800
@@ -552,6 +552,13 @@ vmalloc_fault:
goto no_context;
set_pmd(pmd, *pmd_k);
+ /*
+ * Needed. We have just updated this root with a copy of
+ * the kernel pmd. To return without flushing would
+ * introduce a fault loop.
+ */
+ update_mmu_cache(NULL, pmd, pmd_k->pmd);
+
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_pres...
2011 Jun 11
0
ext3 and btrfs various Oops and kernel BUGs
...>] do_exit+0x244/0x6da
Jun 10 14:50:23 mithrandir kernel: [40871.732005] [<ffffffff810335c9>] ? kmsg_dump+0xcb/0xda
Jun 10 14:50:23 mithrandir kernel: [40871.732005] [<ffffffff8100492e>] oops_end+0x89/0x8e
Jun 10 14:50:23 mithrandir kernel: [40871.732005] [<ffffffff81396a4c>] no_context+0x1fe/0x20d
Jun 10 14:50:23 mithrandir kernel: [40871.732005] [<ffffffffa0d9397f>] ? lookup_extent_mapping+0xaf/0xc2 [btrfs]
Jun 10 14:50:23 mithrandir kernel: [40871.732005] [<ffffffff81396be9>] __bad_area_nosemaphore+0x18e/0x1b1
Jun 10 14:50:23 mithrandir kernel: [40871.732005] [<...