Displaying 20 results from an estimated 20000 matches similar to: "lazy mmu and interrupts"
2007 Apr 18 (2 messages): pte_offset_map + lazy mmu
Is pte_offset_map allowed to happen within lazy mmu? I presume not,
because you definitely don't want the mapping pte update to be deferred.
Or, more specifically, is kunmap_atomic ever allowed within lazy mmu?
I'm looking at kpte_clear_flush; I've already got a patch which turns
this into a pv_op, along with a Xen implementation. But I think it's
probably an excess pv_op for a ...
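For reference, a minimal sketch (not from the thread) of how a lazy MMU section is expressed with the generic hooks: between arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() a paravirt backend is allowed to queue pte updates instead of applying them immediately, which is exactly why an update that must be visible at once, such as the one kunmap_atomic() performs, is suspect inside such a section.

#include <linux/mm.h>
#include <asm/pgtable.h>

/*
 * Sketch only: pte updates issued inside the lazy section may sit in the
 * backend's queue until arch_leave_lazy_mmu_mode() flushes it, so nothing
 * in here may depend on the new mapping being usable immediately.
 */
static void batched_pte_updates(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t newval)
{
	arch_enter_lazy_mmu_mode();	/* backend may start queueing */
	set_pte_at(mm, addr, ptep, newval);
	/* ... further updates that can legitimately be deferred ... */
	arch_leave_lazy_mmu_mode();	/* queued updates are applied here */
}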
2007 Aug 23 (5 messages): [PATCH] Fix preemptible lazy mode bug
I recently sent off a fix for lazy vmalloc faults which can happen under
paravirt when lazy mode is enabled. Unfortunately, I jumped the gun a
bit on fixing this. I neglected to notice that since the new call to
flush the MMU update queue is called from the page fault handler, it can
be pre-empted. Both VMI and Xen use per-cpu variables to track lazy
mode state, as all previous ...
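A minimal sketch of the pattern the fix is about, assuming the arch_flush_lazy_mmu_mode() hook from the same era: because the lazy-mode state lives in per-cpu variables, the state check and the flush must not be separated by a preemption that migrates the task to another CPU.

#include <linux/preempt.h>
#include <asm/pgtable.h>

/*
 * Sketch only: keep the per-cpu lazy-mode state stable while any queued
 * MMU updates are flushed, so a preemption in the page fault path cannot
 * move us to a CPU whose lazy state differs from the one we checked.
 */
static void flush_lazy_mmu_preempt_safe(void)
{
	preempt_disable();
	arch_flush_lazy_mmu_mode();	/* drains any pending updates */
	preempt_enable();
}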
2007 Apr 18 (31 messages): [PATCH 00/28] Updates for firstfloor paravirt-ops patches
Hi Andi,
This is a set of updates for the firstfloor patch queue.
Quick rundown:
revert-mm-x86_64-mm-account-for-module-percpu-space-separately-from-kernel-percpu.patch
separate-module-percpu-space.patch
	Update the module percpu accounting patch
fix-ff-allow-percpu-variables-to-be-page-aligned.patch
	Make sure the percpu memory allocation is page-aligned ...
2008 May 31 (9 messages): [PATCH 0 of 4] mm+paravirt+xen: add pte read-modify-write abstraction (take 2)
Hi all,
[ Change since last post: change name to ptep_modify_prot_, on the
grounds that it isn't really a general pte-modification interface. ]
This little series adds a new transaction-like abstraction for doing
RMW updates to a pte, hooks it into paravirt_ops, and then makes use
of it in Xen.
The basic problem is that mprotect is very slow under Xen (up to 50x
slower than native), ...
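For context, a short sketch of the interface as the series describes it (signatures assumed from the posting, not authoritative): the start/commit pair brackets the read-modify-write so a backend such as Xen can turn it into a single batched or trapless update instead of a clear-plus-set round trip per pte.

#include <linux/mm.h>
#include <asm/pgtable.h>

/*
 * Sketch only: the transaction-like RMW update the series proposes,
 * shown on the mprotect-style permission change.
 */
static void change_pte_prot(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, pgprot_t newprot)
{
	pte_t ptent;

	ptent = ptep_modify_prot_start(mm, addr, ptep);	/* read old value */
	ptent = pte_modify(ptent, newprot);		/* adjust protections */
	ptep_modify_prot_commit(mm, addr, ptep, ptent);	/* install new value */
}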
2007 Apr 18 (1 message): how set_pte_at()'s vaddr and ptep args relate
Hi Zach,
I'm wondering what the interface requirements of set_pte_at()'s "addr"
and "ptep" args are. I presume that in general the ptep points to the
pte entry which corresponds to the vaddr, but is this necessarily the case?
For example, it is valid to pass a non-highmem page to kmap_atomic(), which
will simply return a direct pointer to the page.
kunmap_atomic() ...
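To make the question concrete, a minimal sketch (not from the mail) of the usual contract: the ptep handed to set_pte_at() is found by walking the page tables for the very address passed as addr, so the two normally describe the same translation; whether a caller may ever break that pairing is what is being asked.

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Sketch only: obtain the pte slot for addr and install a mapping there. */
static int map_page_at(struct mm_struct *mm, unsigned long addr,
		       struct page *page, pgprot_t prot)
{
	pgd_t *pgd = pgd_offset(mm, addr);
	pud_t *pud;
	pmd_t *pmd;
	pte_t *ptep;

	if (pgd_none(*pgd))
		return -EINVAL;
	pud = pud_offset(pgd, addr);
	if (pud_none(*pud))
		return -EINVAL;
	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return -EINVAL;

	ptep = pte_offset_map(pmd, addr);	/* slot that translates addr */
	set_pte_at(mm, addr, ptep, mk_pte(page, prot));
	pte_unmap(ptep);
	return 0;
}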
2007 Apr 18 (17 messages): [patch 00/17] paravirt_ops updates
Hi Andi,
This series of patches updates paravirt_ops in various ways. Some of the
changes are plain cleanups and improvements, and some add some interfaces
necessary for Xen.
The brief overview:
add-MAINTAINERS.patch - obvious
remove-CONFIG_DEBUG_PARAVIRT.patch - no longer needed
paravirt-nop.patch - mark nop operations consistently
paravirt-pte-accessors.patch - operations to pack/unpack ...
2008 May 23 (6 messages): [PATCH 0 of 4] mm+paravirt+xen: add pte read-modify-write abstraction
Hi all,
This little series adds a new transaction-like abstraction for doing
RMW updates to a pte, hooks it into paravirt_ops, and then makes use
of it in Xen.
The basic problem is that mprotect is very slow under Xen (up to 50x
slower than native), primarily because of the
	ptent = ptep_get_and_clear(mm, addr, pte);
	ptent = pte_modify(ptent, newprot);
	/* ... */
	set_pte_at(mm, addr, pte, ptent);
2007 Apr 18 (23 messages): [patch 00/20] paravirt_ops updates
Hi Andi,
Here's a repost of the paravirt_ops update series I posted the other day.
Since then, I found a few potential bugs with patching clobbering,
cleaned up and documented paravirt.h and the patching machinery.
Overview:
add-MAINTAINERS.patch
	obvious
remove-CONFIG_DEBUG_PARAVIRT.patch
	No longer meaningful or needed.
paravirt-nop.patch
	Clean up nop paravirt_ops functions, mainly to ...
2019 Mar 08 (3 messages): [RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jason,
On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> Just to make sure I understand here. For boosting through huge TLB, do
> you mean we can do that in the future (e.g. by mapping more userspace
> pages to kernel) or it can be done by this series (only about three 4K
> pages were vmapped per virtqueue)?
When I answered about the advantages of mmu notifier and ...
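The mechanism under discussion, sketched very roughly (not the actual vhost patch; the unpinning and the MMU-notifier invalidation the thread is really about are omitted, and the flags-based get_user_pages_fast() form is assumed): a handful of userspace pages are pinned and given a contiguous kernel virtual mapping so the virtqueue metadata can be accessed without uaccess copies.

#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Sketch only: pin a few user pages and map them into kernel VA space. */
static void *map_vq_metadata(unsigned long uaddr, struct page **pages,
			     int npages)
{
	int pinned;

	pinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
	if (pinned != npages)
		return NULL;	/* sketch: a real caller must unpin partial pins */

	return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
}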