Christopher S. Aker
2008-Mar-03 00:44 UTC
[Xen-devel] unable to handle kernel paging request at virtual address
Xen : 3.2.0 64bit
dom0: 2.6.16.33 32pae
domU: linux-2.6.18-xen.hg @ 416:08e85e79c65d 32pae

Our server infrastructure automatically logs console output from domUs
that panic. I have available to me another dozen or so BUG outputs
similar but not identical to the example below, if this piques anyone's
interest...

BUG: unable to handle kernel paging request at virtual address 915536e9
printing eip:
03e45000 -> *pde = 00000002:d2d46027
15b9a000 -> *pme = 00000000:00000000
Oops: 0000 [#1]
SMP
Modules linked in:
CPU: 0
EIP: 0061:[<c01750f4>] Not tainted VLI
EFLAGS: 00010286 (2.6.18.8-domU-linode7 #1)
EIP is at iput+0xd/0x6b
eax: 915536c9 ebx: c6e130d4 ecx: c6e130ec edx: c6e130ec
esi: 0000002f edi: 00000000 ebp: d61b983c esp: d5e2fee4
ds: 007b es: 007b ss: 0069
Process kswapd0 (pid: 114, ti=d5e2e000 task=d5dc0ab0 task.ti=d5e2e000)
Stack: d20b9214 c0173edf d20b9214 0000002f c0174031 00000080 00000050 00003d54
       c137eac0 00000088 00013c51 c0174094 c01439a0 c012eb7b 00000000 c0513400
       0000000c 00000000 c05c6fe0 00000080 000000d0 00000000 00000002 c0513400
Call Trace:
 [<c0173edf>] prune_one_dentry+0x54/0x75
 [<c0174031>] prune_dcache+0x131/0x15b
 [<c0174094>] shrink_dcache_memory+0x39/0x3b
 [<c01439a0>] shrink_slab+0x111/0x186
 [<c012eb7b>] finish_wait+0x25/0x4b
 [<c0144d5b>] kswapd+0x2e9/0x3eb
 [<c012e960>] autoremove_wake_function+0x0/0x37
 [<c0144a72>] kswapd+0x0/0x3eb
 [<c012e89a>] kthread+0xde/0xe2
 [<c012e7bc>] kthread+0x0/0xe2
 [<c0102b75>] kernel_thread_helper+0x5/0xb
Code: 06 89 d8 ff d2 5b c3 89 da a1 00 de 58 c0 5b e9 4f 52 fe ff 0f 0b ae 00 26 6b 4c c0 eb c5 53 89 c3 85 c0 74 58 8b 80 9c 00 00 00 <8b> 40 20 83 bb 40 01 00 00 20 74 48 85 c0 74 0e 8b 50 14 85 d2
EIP: [<c01750f4>] iput+0xd/0x6b SS:ESP 0069:d5e2fee4
<1>BUG: unable to handle kernel paging request at virtual address 0424cb05
printing eip:
c01750f4
03e45000 -> *pde = 00000003:30e43027
0629d000 -> *pme = 00000000:00000000
Oops: 0000 [#2]
SMP
Modules linked in:
CPU: 0
EIP: 0061:[<c01750f4>] Not tainted VLI
EFLAGS: 00210282 (2.6.18.8-domU-linode7 #1)
EIP is at iput+0xd/0x6b
eax: 0424cae5 ebx: c6e132c8 ecx: c6e132e0 edx: c6e132e0
esi: 0000007f edi: 00000000 ebp: d61b983c esp: d09bfd1c
ds: 007b es: 007b ss: 0069
Process httpd (pid: 3677, ti=d09be000 task=d5ce8ab0 task.ti=d09be000)
Stack: d20b9094 c0173edf d20b9094 0000007f c0174031 00000080 00000000 00003d54
       c137eac0 00000090 00013d55 c0174094 c01439a0 00000003 d5c2c878 d5c2c874
       00000018 00000000 c05c6fe0 00000100 000280d2 00000000 0000000a 00000040
Call Trace:
 [<c0173edf>] prune_one_dentry+0x54/0x75
 [<c0174031>] prune_dcache+0x131/0x15b
 [<c0174094>] shrink_dcache_memory+0x39/0x3b
 [<c01439a0>] shrink_slab+0x111/0x186
 [<c0144f94>] try_to_free_pages+0x137/0x1f3
 [<c0140ae7>] __alloc_pages+0x12e/0x2d2
 [<c0158316>] shmem_swp_alloc+0x8b/0x277
 [<c015894f>] shmem_getpage+0x10e/0x5e1
 [<c01598fc>] shmem_nopage+0x97/0xce
 [<c014bc6c>] __handle_mm_fault+0x28a/0x15db
 [<c013de28>] __generic_file_aio_read+0x16e/0x243
 [<c013bc05>] file_read_actor+0x0/0xc7
 [<c015db64>] do_sync_read+0xc1/0x11c
 [<c0166e61>] cp_new_stat64+0xf4/0x106
 [<c01103ea>] do_page_fault+0x10f/0xc70
 [<c015daa3>] do_sync_read+0x0/0x11c
 [<c015df67>] vfs_read+0xa2/0x160
 [<c015ea2e>] sys_read+0x41/0x6a
 [<c01102db>] do_page_fault+0x0/0xc70
 [<c01052e3>] error_code+0x2b/0x30
Code: 06 89 d8 ff d2 5b c3 89 da a1 00 de 58 c0 5b e9 4f 52 fe ff 0f 0b ae 00 26 6b 4c c0 eb c5 53 89 c3 85 c0 74 58 8b 80 9c 00 00 00 <8b> 40 20 83 bb 40 01 00 00 20 74 48 85 c0 74 0e 8b 50 14 85 d2
EIP: [<c01750f4>] iput+0xd/0x6b SS:ESP 0069:d09bfd1c
<1>BUG: unable to handle kernel paging request at virtual address 010d9559
printing eip:
c015a531
06139000 -> *pde = 00000003:69252027
0768e000 -> *pme = 00000000:00000000
Oops: 0002 [#3]
SMP
Modules linked in:
CPU: 0
EIP: 0061:[<c015a531>] Not tainted VLI
EFLAGS: 00210082 (2.6.18.8-domU-linode7 #1)
EIP is at free_block+0x83/0xfe
eax: c5883000 ebx: d5c2fa80 ecx: c6e13000 edx: 010d9555
esi: c6e13a00 edi: d5cac080 ebp: d5e30dd8 esp: c6679d24
ds: 007b es: 007b ss: 0069
Process httpd (pid: 3680, ti=c6678000 task=d4c54030 task.ti=c6678000)
Stack: c01426ea 0000001b 00000011 d5e30d94 d5cac080 c7a28618 d5cb5400 c015a6df
       00000000 d5e30d80 0000001b d5c2fa80 d5e30d80 00000000 c7a28618 00000000
       c015a3f4 c7a286b0 c7a286b0 c6679da0 00000029 c01750ce c7a286b8 c0175b6f
Call Trace:
 [<c01426ea>] pagevec_lookup+0x1c/0x24
 [<c015a6df>] cache_flusharray+0x55/0xc1
 [<c015a3f4>] kmem_cache_free+0xc8/0xee
 [<c01750ce>] destroy_inode+0x2e/0x47
 [<c0175b6f>] dispose_list+0x6e/0xcf
 [<c0175db6>] shrink_icache_memory+0x1e6/0x223
 [<c01439a0>] shrink_slab+0x111/0x186
 [<c0144f94>] try_to_free_pages+0x137/0x1f3
 [<c0140ae7>] __alloc_pages+0x12e/0x2d2
 [<c014c9a5>] __handle_mm_fault+0xfc3/0x15db
 [<c013de28>] __generic_file_aio_read+0x16e/0x243
 [<c013bc05>] file_read_actor+0x0/0xc7
 [<c014e8fb>] vma_adjust+0x11a/0x412
 [<c0166e61>] cp_new_stat64+0xf4/0x106
 [<c01103ea>] do_page_fault+0x10f/0xc70
 [<c015dfc5>] vfs_read+0x100/0x160
 [<c014f5a1>] sys_brk+0xe5/0xef
 [<c01102db>] do_page_fault+0x0/0xc70
 [<c01052e3>] error_code+0x2b/0x30
Code: 00 40 c1 ea 0c c1 e2 05 03 15 a8 2a 5e c0 8b 02 f6 c4 40 75 7c 8b 02 84 c0 79 7e 8b 4a 1c 8b 44 24 20 8b 5c 87 50 8b 11 8b 41 04 <89> 42 04 89 10 c7 01 00 01 10 00 c7 41 04 00 02 20 00 2b 71 0c
EIP: [<c015a531>] free_block+0x83/0xfe SS:ESP 0069:c6679d24

-Chris
Keir Fraser
2008-Mar-03 07:55 UTC
Re: [Xen-devel] unable to handle kernel paging request at virtual address
The virtual addresses being faulted on are all rubbish non-kernel addresses.
How long do you run a VM for to see this? Our own (not very in-depth)
testing hasn't shown up anything like this.

 -- Keir

On 3/3/08 00:44, "Christopher S. Aker" <caker@theshore.net> wrote:

> Xen : 3.2.0 64bit
> dom0: 2.6.16.33 32pae
> domU: linux-2.6.18-xen.hg @ 416:08e85e79c65d 32pae
>
> Our server infrastructure automatically logs console output from domUs
> that panic. I have available to me another dozen or so BUG outputs
> similar but not identical to the example below, if this piques anyone's
> interest...
Christopher S. Aker
2008-Mar-03 15:03 UTC
Re: [Xen-devel] unable to handle kernel paging request at virtual address
Keir Fraser wrote:
> The virtual addresses being faulted on are all rubbish non-kernel addresses.
> How long do you run a VM for to see this? Our own (not very in-depth)
> testing hasn't shown up anything like this.

Not long -- perhaps a few days to a week.

Googling around, I'm finding similar posts for non-xen kernels, on newer
kernels, too:

http://tinyurl.com/2nwtju

Any chance this is a bug in Linux itself?

-Chris
Keir Fraser
2008-Mar-03 15:16 UTC
Re: [Xen-devel] unable to handle kernel paging request at virtual address
On 3/3/08 15:03, "Christopher S. Aker" <caker@theshore.net> wrote:

> Keir Fraser wrote:
>> The virtual addresses being faulted on are all rubbish non-kernel addresses.
>> How long do you run a VM for to see this? Our own (not very in-depth)
>> testing hasn't shown up anything like this.
>
> Not long -- perhaps a few days to a week.
>
> Googling around, I'm finding similar posts for non-xen kernels, on newer
> kernels, too:
>
> http://tinyurl.com/2nwtju
>
> Any chance this is a bug in Linux itself?

Well, it may just be a fairly generic place to crash if things have become
corrupted. I wonder if it could be related to gcc version, or perhaps if you
see this on one physical host only it might be a faulty DIMM?

 -- Keir
Christopher S. Aker
2008-Mar-03 15:58 UTC
Re: [Xen-devel] unable to handle kernel paging request at virtual address
Keir Fraser wrote:
>> Any chance this is a bug in Linux itself?
>
> Well, it may just be a fairly generic place to crash if things have become
> corrupted. I wonder if it could be related to gcc version, or perhaps if you
> see this on one physical host only it might be a faulty DIMM?

This happens across multiple host servers, so I wouldn't suspect faulty
hardware in this case.

gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)

Is there a recommended gcc version? What are you guys using?

-Chris
Christopher S. Aker
2008-Mar-03 16:06 UTC
Re: [Xen-devel] unable to handle kernel paging request at virtual address
Here's another report from a user who crashed, then was trying to umount a
partition...

# umount /home
sb orphan head is 176344
sb_info orphan list:
BUG: unable to handle kernel paging request at virtual address 017902bd
printing eip:
c0315435
10723000 -> *pde = 00000002:2cca1027
0336b000 -> *pme = 00000000:00000000
Oops: 0000 [#1]
SMP
Modules linked in:
CPU: 0
EIP: 0061:[<c0315435>] Not tainted VLI
EFLAGS: 00010097 (2.6.18.8-domU-linode7 #1)
EIP is at vsnprintf+0x437/0x615
eax: 017902bd ebx: ffffffff ecx: 017902bd edx: fffffffe
esi: c05d118b edi: c05d1580 ebp: c04d325c esp: d5969e04
ds: 007b es: 007b ss: 0069
Process umount (pid: 2314, ti=d5968000 task=c1d54570 task.ti=d5968000)
Stack: 00000fff c1306180 c052e3a0 c0349c51 00002270 00000015 00000400 c05d1180
       00002285 00000000 ffffffff ffffffff d5969ee4 c04d325c 00000400 d594612c
       00000000 d5dc8e00 c0315627 d5969ee0 c3611e5c c011c30f d5969ee0 c04847ef
Call Trace:
 [<c0349c51>] kcons_write+0x0/0xc9
 [<c0315627>] vscnprintf+0x14/0x21
 [<c011c30f>] vprintk+0x6e/0x3c1
 [<c04847ef>] __wait_on_bit+0x54/0x5e
 [<c015fc64>] sync_buffer+0x0/0x34
 [<c012ea2d>] wake_bit_function+0x0/0x3c
 [<c011c67d>] printk+0x1b/0x1f
 [<c01d6fab>] ext3_put_super+0x12b/0x1f6
 [<c0163e86>] generic_shutdown_super+0x82/0x121
 [<c0163f42>] kill_block_super+0x1d/0x2d
 [<c01640f0>] deactivate_super+0x53/0x66
 [<c0178421>] sys_umount+0x3e/0x21e
 [<c01103ea>] do_page_fault+0x10f/0xc70
 [<c014e7cb>] unmap_region+0x101/0x117
 [<c0166f1c>] sys_stat64+0xf/0x33
 [<c014dc98>] remove_vma+0x31/0x36
 [<c014ee2e>] do_munmap+0x177/0x1cd
 [<c0178618>] sys_oldumount+0x17/0x1b
 [<c0105137>] syscall_call+0x7/0xb
Code: 42 fc ff ff 8b 44 24 4c 83 c0 04 89 44 24 30 8b 54 24 4c 8b 0a 81 f9 ff 0f 00 00 b8 d5 f6 4d c0 0f 46 c8 8b 54 24 2c 89 c8 eb 06 <80> 38 00 74 07 40 4a 83 fa ff 75 f4 29 c8 89 c3 8b 6c 24 28 f6
EIP: [<c0315435>] vsnprintf+0x437/0x615 SS:ESP 0069:d5969e04
BUG: warning at kernel/exit.c:854/do_exit()
 [<c011e4cd>] do_exit+0x8c6/0x8cb
 [<c011c67d>] printk+0x1b/0x1f
 [<c0310069>] cfq_dispatch_requests+0x4e4/0x5a0
 [<c0105f2e>] die+0x2c6/0x2d7
 [<c0110a62>] do_page_fault+0x787/0xc70
 [<c0124cc9>] run_timer_softirq+0x3a/0x1d3
 [<c034a409>] handle_input+0x91/0xd8
 [<c03163f1>] memmove+0x24/0x2a
 [<c01102db>] do_page_fault+0x0/0xc70
 [<c01052e3>] error_code+0x2b/0x30
 [<c0315435>] vsnprintf+0x437/0x615
 [<c0349c51>] kcons_write+0x0/0xc9
 [<c0315627>] vscnprintf+0x14/0x21
 [<c011c30f>] vprintk+0x6e/0x3c1
 [<c04847ef>] __wait_on_bit+0x54/0x5e
 [<c015fc64>] sync_buffer+0x0/0x34
 [<c012ea2d>] wake_bit_function+0x0/0x3c
 [<c011c67d>] printk+0x1b/0x1f
 [<c01d6fab>] ext3_put_super+0x12b/0x1f6
 [<c0163e86>] generic_shutdown_super+0x82/0x121
 [<c0163f42>] kill_block_super+0x1d/0x2d
 [<c01640f0>] deactivate_super+0x53/0x66
 [<c0178421>] sys_umount+0x3e/0x21e
 [<c01103ea>] do_page_fault+0x10f/0xc70
 [<c014e7cb>] unmap_region+0x101/0x117
 [<c0166f1c>] sys_stat64+0xf/0x33
 [<c014dc98>] remove_vma+0x31/0x36
 [<c014ee2e>] do_munmap+0x177/0x1cd
 [<c0178618>] sys_oldumount+0x17/0x1b
 [<c0105137>] syscall_call+0x7/0xb

-Chris
Andi Kleen
2008-Mar-03 18:15 UTC
[Xen-devel] Re: unable to handle kernel paging request at virtual address
Keir Fraser <keir.fraser@eu.citrix.com> writes:
>
> Well, it may just be a fairly generic place to crash if things have become
> corrupted.

It is. dcache tends to use significant parts of memory, and when you have
memory corruption in that area you tend to crash in one of the dcache
walkers, of which prune_dcache is an example.

The usual approach is to enable slab debugging etc. (see the sketch below).

-Andi
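
P.S. A minimal sketch of what "slab debugging" means in practice, assuming a
2.6.18-era .config (these are the stock kernel-hacking options, nothing
Xen-specific; the exact option set for your tree is a guess on my part):

# Illustrative .config fragment -- check Kconfig on the actual tree.
# DEBUG_SLAB poisons slab objects and adds red zones: freed memory is
# filled with 0x6b, so a later oops on an address like 6b6b6b6b (or a
# red-zone error in the log) points at a use-after-free or overrun in the
# dentry/inode caches instead of a crash much later in prune_dcache.
CONFIG_DEBUG_SLAB=y
# Optionally record who allocated each object (/proc/slab_allocators).
CONFIG_DEBUG_SLAB_LEAK=y

Both add noticeable overhead, so probably best tried on a test guest first.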