Displaying 15 results from an estimated 15 matches for "__pmd_alloc".
2007 Apr 18 · 0 · [RFC/PATCH LGUEST X86_64 02/13] hvvm export page utils
...=====
--- work-pv.orig/mm/memory.c
+++ work-pv/mm/memory.c
@@ -2798,3 +2798,10 @@ int access_process_vm(struct task_struct
return buf - old_buf;
}
EXPORT_SYMBOL_GPL(access_process_vm);
+
+/* temp until we put the hv vm stuff into the kernel */
+EXPORT_SYMBOL_GPL(__pud_alloc);
+EXPORT_SYMBOL_GPL(__pmd_alloc);
+EXPORT_SYMBOL_GPL(__pte_alloc_kernel);
+EXPORT_SYMBOL_GPL(pmd_clear_bad);
+EXPORT_SYMBOL_GPL(pud_clear_bad);
--
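For illustration only (not part of the patch above): the exported helpers are normally reached through the pud_alloc()/pmd_alloc()/pte_alloc_kernel() wrappers, which fall through to __pud_alloc(), __pmd_alloc() and __pte_alloc_kernel() only when the upper-level entry is still empty, and pmd_clear_bad()/pud_clear_bad() back the *_none_or_clear_bad() sanity checks. The map_hv_page() function below is made up purely to sketch how a 2.6-era out-of-tree hypervisor module might use these exports to populate a kernel mapping.

/*
 * Hedged sketch only -- not from the lguest patch.  map_hv_page() is a
 * made-up example of mapping one pfn at a kernel virtual address once
 * the page-table allocators are exported to modules.
 */
#include <linux/mm.h>
#include <linux/errno.h>
#include <asm/pgalloc.h>

static int map_hv_page(unsigned long vaddr, unsigned long pfn, pgprot_t prot)
{
        struct mm_struct *mm = &init_mm;
        pgd_t *pgd = pgd_offset_k(vaddr);
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        pud = pud_alloc(mm, pgd, vaddr);        /* may call __pud_alloc() */
        if (!pud)
                return -ENOMEM;
        pmd = pmd_alloc(mm, pud, vaddr);        /* may call __pmd_alloc() */
        if (!pmd)
                return -ENOMEM;
        pte = pte_alloc_kernel(pmd, vaddr);     /* may call __pte_alloc_kernel() */
        if (!pte)
                return -ENOMEM;

        set_pte(pte, pfn_pte(pfn, prot));       /* install the mapping */
        return 0;
}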
2015 Mar 30 · 2 · [PATCH 0/9] qspinlock stuff -v15
..._wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_task
|--0.66%-- __pud_alloc
|--0.59%-- ext4_discard_preallocations
--6.53%-- [...]
With the qspinlock patch, the perf profile at 1000 users was:
3.25% reaim [kernel.kallsyms] [k] queue_spin_lock_slowp...
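For illustration only (not kernel code): queue_spin_lock_slowpath above is the queued-lock slow path, in which contending CPUs queue on per-CPU nodes and spin on their own cache line instead of all hammering the shared lock word. Below is a rough MCS-style sketch of that queueing idea in C11 atomics; the real qspinlock additionally packs the queue tail and a pending bit into a single 4-byte word, and mcs_lock/mcs_node and the function names here are invented for the example.

/*
 * Hedged sketch of an MCS-style queued lock, not the kernel's qspinlock.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;              /* set true by the releaser to hand over the lock */
};

struct mcs_lock {
        _Atomic(struct mcs_node *) tail; /* initialise to NULL; this is the whole lock */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *prev;

        atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&node->locked, false, memory_order_relaxed);

        /* Join the queue; if there was no predecessor, we own the lock. */
        prev = atomic_exchange_explicit(&lock->tail, node, memory_order_acq_rel);
        if (!prev)
                return;

        /* Publish ourselves to the predecessor, then spin on our own node. */
        atomic_store_explicit(&prev->next, node, memory_order_release);
        while (!atomic_load_explicit(&node->locked, memory_order_acquire))
                ;       /* the kernel would cpu_relax() here */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *next = atomic_load_explicit(&node->next, memory_order_acquire);

        if (!next) {
                /* No known successor: try to swing the tail back to empty. */
                struct mcs_node *expected = node;
                if (atomic_compare_exchange_strong_explicit(&lock->tail, &expected, NULL,
                                                            memory_order_acq_rel,
                                                            memory_order_acquire))
                        return;
                /* A successor is enqueueing; wait for it to link itself. */
                while (!(next = atomic_load_explicit(&node->next, memory_order_acquire)))
                        ;
        }
        atomic_store_explicit(&next->locked, true, memory_order_release);
}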
2018 Aug 03 · 0 · [net-next, v6, 6/7] net-sysfs: Add interface for Rx queue(s) map per Tx queue
...x32/0x1c0
> [ 7.276424] ? __handle_mm_fault+0xc85/0x3140
> [ 7.276433] ? lock_downgrade+0x5e0/0x5e0
> [ 7.276439] ? mem_cgroup_commit_charge+0xb4/0xf80
> [ 7.276453] ? _raw_spin_unlock+0x24/0x30
> [ 7.276458] ? __handle_mm_fault+0xc85/0x3140
> [ 7.276467] ? __pmd_alloc+0x430/0x430
> [ 7.276473] ? find_held_lock+0x32/0x1c0
> [ 7.276485] ? __fget_light+0x55/0x1f0
> [ 7.276497] ? __sys_sendmsg+0xd2/0x170
> [ 7.276502] __sys_sendmsg+0xd2/0x170
> [ 7.276508] ? __ia32_sys_shutdown+0x70/0x70
> [ 7.276516] ? handle_mm_fault+0x1f9...
2006 Oct 01 · 4 · Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:197
Hello list,
I just got this ominous bug on my machine; it has already been seen
several times:
http://lists.xensource.com/archives/html/xen-devel/2006-01/msg00180.html
The machine is very similar: two dual-core Opterons running one of the
latest xen-3.0.3-unstable builds (20060926 hypervisor, and a vanilla
2.6.18 + Xen patch from Fedora from 20060915).
This machine was
2018 Mar 19 · 0 · get_user_pages returning 0 (was Re: kernel BUG at drivers/vhost/vhost.c:LINE!)
...36.072170] vhost_update_used_flags+0x379/0x480
[ 36.076895] vhost_vq_init_access+0xca/0x540
[ 36.081276] vhost_net_ioctl+0xee0/0x1920
[ 36.085397] ? vhost_net_stop_vq+0xf0/0xf0
[ 36.089603] ? avc_ss_reset+0x110/0x110
[ 36.093547] ? __handle_mm_fault+0x5ba/0x38c0
[ 36.098012] ? __pmd_alloc+0x4e0/0x4e0
[ 36.101869] ? trace_hardirqs_off+0x10/0x10
[ 36.106161] ? __fd_install+0x25f/0x740
[ 36.110106] ? find_held_lock+0x35/0x1d0
[ 36.114145] ? check_same_owner+0x320/0x320
[ 36.118438] ? rcu_note_context_switch+0x710/0x710
[ 36.123334] ? __do_page_fault+0x5f7/0xc90
[ 36...
2006 Oct 04 · 6 · RE: Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:197
...all Trace: <ffffffff801512ad>{bad_page+93}
> > <ffffffff80151d57>{get_page_from_freelist+775}
> > Oct 3 23:27:52 tuek <ffffffff80151f1d>{__alloc_pages+157}
> > <ffffffff80152249>{get_zeroed_page+73}
> > Oct 3 23:27:52 tuek <ffffffff80158cf4>{__pmd_alloc+36}
> > <ffffffff8015e55e>{copy_page_range+1262}
> > Oct 3 23:27:52 tuek <ffffffff802a6bea>{rb_insert_color+250}
> > <ffffffff80127cb7>{copy_process+3079}
> > Oct 3 23:27:52 tuek <ffffffff80128c8e>{do_fork+238}
> > <ffffffff801710d6>{...
2006 Mar 14 · 12 · [RFC] VMI for Xen?
I'm sure everyone has seen the drop of VMI patches for Linux at this
point, but just in case, the link is included below.
I've read this version of the VMI spec and have made my way through most
of the patches. While I wasn't really that impressed with the first
spec wrt Xen, the second version seems to be much more palatable.
Specifically, the code inlining and
2015 Mar 16 · 19 · [PATCH 0/9] qspinlock stuff -v15
Hi Waiman,
As promised, here is the paravirt stuff I did during the trip to BOS last week.
All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).
The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test it, but it should be straightforward
to make work.
2015 Apr 07 · 18 · [PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
...8%-- do_wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_task
|--0.66%-- __pud_alloc
|--0.59%-- ext4_discard_preallocations
--6.53%-- [...]
With the qspinlock patch, the perf profile at 1000 users was:
3.25% reaim [kernel.kallsyms] [k] queue_spin_lock_slowpath...
2015 Apr 24 · 16 · [PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
...8%-- do_wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_task
|--0.66%-- __pud_alloc
|--0.59%-- ext4_discard_preallocations
--6.53%-- [...]
With the qspinlock patch, the perf profile at 1000 users was:
3.25% reaim [kernel.kallsyms] [k] queue_spin_lock_slowpath...