search for: copy_page_range

Displaying 7 results from an estimated 8 matches for "copy_page_range".

2007 May 23
0
Apache CGI Performance Big Degradation in Dom0 vs. Native
...(estimated) Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples  %       app name                       symbol name
46048    5.8715  vmlinux-2.6.16.46-0.10-bigsmp  kmap_atomic
41587    5.3027  vmlinux-2.6.16.46-0.10-bigsmp  copy_page_range
39759    5.0696  vmlinux-2.6.16.46-0.10-bigsmp  unmap_vmas
38722    4.9373  vmlinux-2.6.16.46-0.10-bigsmp  page_fault
30638    3.9066  vmlinux-2.6.16.46-0.10-bigsmp  page_remove_rmap
29481    3.7591  vmlinux-2.6.16.46-0.10-bigsmp  __handle_mm_fault
Native Prefork: CPU: Core 2, speed 2667.14 MHz (...
2017 May 21
2
Crash in CentOS 7 kernel-3.10.0-514.16.1.el7.x86_64 in Xen PV mode
...50
[ 32.305004] [<ffffffff81698c7e>] xen_do_hypervisor_callback+0x1e/0x30
[ 32.305004] <EOI>
[ 32.305004] [<ffffffff811af916>] ? copy_pte_range+0x2b6/0x5a0
[ 32.305004] [<ffffffff811af8e6>] ? copy_pte_range+0x286/0x5a0
[ 32.305004] [<ffffffff811b24d2>] ? copy_page_range+0x312/0x490
[ 32.305004] [<ffffffff81083012>] ? dup_mm+0x362/0x680
[ 32.305004] [<ffffffff810847ae>] ? copy_process+0x144e/0x1960
[ 32.305004] [<ffffffff81084e71>] ? do_fork+0x91/0x2c0
[ 32.305004] [<ffffffff81085126>] ? SyS_clone+0x16/0x20
[ 32.305004] [<f...
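For orientation, the trace above is the fork() path duplicating the parent's address space at the moment the Xen hypervisor callback fired: SyS_clone -> do_fork -> copy_process -> dup_mm -> copy_page_range -> copy_pte_range. A minimal sketch of that chain, loosely modeled on kernel/fork.c and mm/memory.c of that kernel generation (dup_mmap_sketch is an illustrative name; locking and error paths are omitted):

#include <linux/mm.h>
#include <linux/mm_types.h>

/* Sketch: dup_mm() -> dup_mmap() walks every VMA of the parent and
 * calls copy_page_range(), which descends the page-table levels and
 * ends in copy_pte_range() -- the frames visible in the oops above.
 */
static int dup_mmap_sketch(struct mm_struct *mm, struct mm_struct *oldmm)
{
        struct vm_area_struct *mpnt;
        int retval;

        for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
                retval = copy_page_range(mm, oldmm, mpnt);
                if (retval)
                        return retval;
        }
        return 0;
}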
2017 Oct 23
0
Crash in CentOS 7 kernel-3.10.0-514.16.1.el7.x86_64 in Xen PV mode
...<ffffffff81698c7e>] xen_do_hypervisor_callback+0x1e/0x30
> [ 32.305004] <EOI>
> [ 32.305004] [<ffffffff811af916>] ? copy_pte_range+0x2b6/0x5a0
> [ 32.305004] [<ffffffff811af8e6>] ? copy_pte_range+0x286/0x5a0
> [ 32.305004] [<ffffffff811b24d2>] ? copy_page_range+0x312/0x490
> [ 32.305004] [<ffffffff81083012>] ? dup_mm+0x362/0x680
> [ 32.305004] [<ffffffff810847ae>] ? copy_process+0x144e/0x1960
> [ 32.305004] [<ffffffff81084e71>] ? do_fork+0x91/0x2c0
> [ 32.305004] [<ffffffff81085126>] ? SyS_clone+0x16/0x20
> ...
2006 Oct 01
4
Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:197
Hello list, I just got this ominous bug on my machine, one that has already been seen several times: http://lists.xensource.com/archives/html/xen-devel/2006-01/msg00180.html The machine is very similar: it's a machine with two dual-core Opterons, running one of the latest xen-3.0.3-unstable builds (20060926 hypervisor, and a vanilla 2.6.18 + Xen patch from Fedora, 20060915). This machine was
2006 Jul 03
1
Problem with CentOS 4.3 on kernel and ipvsadm
...buffered_rmqueue+0x1c4/0x1e7
Jul 3 04:02:07 lvs2 kernel: [<c014f5f5>] __alloc_pages+0xb3/0x29a
Jul 3 04:02:07 lvs2 kernel: [<c011d21e>] pte_alloc_one+0x18/0x49
Jul 3 04:02:07 lvs2 kernel: [<c0159465>] pte_alloc_map+0x66/0x12d
Jul 3 04:02:07 lvs2 kernel: [<c0159737>] copy_page_range+0xfe/0x358
Jul 3 04:02:07 lvs2 kernel: [<c0122380>] dup_mmap+0x3de/0x4a6
Jul 3 04:02:07 lvs2 kernel: [<c0121f61>] copy_mm+0x10e/0x14f
Jul 3 04:02:07 lvs2 kernel: [<c0123172>] copy_process+0x709/0xd52
Jul 3 04:02:07 lvs2 kernel: [<c0186d5e>] d_alloc+0xc2/0x284
Jul 3...
2006 Oct 04
6
RE: Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:197
...> > <ffffffff80151d57>{get_page_from_freelist+775}
> > Oct 3 23:27:52 tuek <ffffffff80151f1d>{__alloc_pages+157}
> > <ffffffff80152249>{get_zeroed_page+73}
> > Oct 3 23:27:52 tuek <ffffffff80158cf4>{__pmd_alloc+36}
> > <ffffffff8015e55e>{copy_page_range+1262}
> > Oct 3 23:27:52 tuek <ffffffff802a6bea>{rb_insert_color+250}
> > <ffffffff80127cb7>{copy_process+3079}
> > Oct 3 23:27:52 tuek <ffffffff80128c8e>{do_fork+238}
> > <ffffffff801710d6>{fd_install+54} Oct 3 23:27:52 tuek
> > <fffff...
2012 Apr 10
7
[PATCH v3 1/2] xen: enter/exit lazy_mmu_mode around m2p_override calls
This patch is a significant performance improvement for the m2p_override: about 6% using the gntdev device. Each m2p_add/remove_override call issues a MULTI_grant_table_op and a __flush_tlb_single if kmap_op != NULL. Batching all the calls together is a great performance benefit because it means issuing one hypercall total rather than two hypercalls per page. If paravirt_lazy_mode is set
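To illustrate the batching this patch describes, here is a minimal sketch assuming a hypothetical map_pages_batched() helper. Entering lazy MMU mode queues the per-page MMU updates so they are flushed as a single batch on exit, rather than two hypercalls per page; m2p_add_override() and the arch_enter/leave_lazy_mmu_mode() entry points are the interfaces of that kernel era, everything else here is illustrative:

#include <asm/pgtable.h>        /* arch_enter/leave_lazy_mmu_mode() */
#include <asm/xen/page.h>       /* m2p_add_override() (kernels of this era) */
#include <xen/grant_table.h>    /* struct gnttab_map_grant_ref */

/* Hypothetical helper: batch the per-page overrides under lazy MMU
 * mode so the queued updates go out together when the mode is left,
 * instead of issuing two hypercalls for every page.
 */
static int map_pages_batched(unsigned long *mfns, struct page **pages,
                             struct gnttab_map_grant_ref *kmap_ops,
                             int count)
{
        int i, ret = 0;

        arch_enter_lazy_mmu_mode();     /* start queueing MMU updates */
        for (i = 0; i < count; i++) {
                ret = m2p_add_override(mfns[i], pages[i],
                                       kmap_ops ? &kmap_ops[i] : NULL);
                if (ret)
                        break;
        }
        arch_leave_lazy_mmu_mode();     /* flush the whole batch at once */

        return ret;
}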