Displaying 9 results from an estimated 9 matches for "zap_page_rang".
2008 Nov 05
0
[PATCH] blktap: ensure vma->vm_mm's mmap_sem is being held whenever it is being modified
...vers/xen/blktap/blktap.c 2008-11-05 14:27:58.000000000 +0100
@@ -611,9 +611,13 @@ static int blktap_release(struct inode *
/* Clear any active mappings and free foreign map table */
if (info->vma) {
+ struct mm_struct *mm = info->vma->vm_mm;
+
+ down_write(&mm->mmap_sem);
zap_page_range(
info->vma, info->vma->vm_start,
info->vma->vm_end - info->vma->vm_start, NULL);
+ up_write(&mm->mmap_sem);
kfree(info->vma->vm_private_data);
@@ -993,12 +997,13 @@ static void fast_flush_area(pending_req_
int tapidx)
{
struct gnttab_un...
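For context, a minimal sketch of the locking rule this patch enforces, written against the 2008-era API visible in the excerpt (4-argument zap_page_range(), mm->mmap_sem as an rwsem); struct blktap_info and its vma field are stand-ins for the driver state in blktap.c:

#include <linux/mm.h>
#include <linux/rwsem.h>

struct blktap_info {
	struct vm_area_struct *vma;	/* user mapping of the ring/data pages */
};

static void blktap_clear_mappings(struct blktap_info *info)
{
	struct mm_struct *mm;

	if (!info->vma)
		return;

	mm = info->vma->vm_mm;

	/*
	 * zap_page_range() walks and modifies mm's page tables, so the
	 * patch takes that mm's mmap_sem for write around the call.
	 */
	down_write(&mm->mmap_sem);
	zap_page_range(info->vma, info->vma->vm_start,
		       info->vma->vm_end - info->vma->vm_start, NULL);
	up_write(&mm->mmap_sem);
}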
2019 Sep 05
0
DANGER WILL ROBINSON, DANGER
...NMAP).
> >
> > By following the hmm_mirror API, anytime the target process has a change in
> > its page table (ie virtual address -> page) you will get a callback and all you
> > have to do is clear the page table within the inspector process and flush tlb
> > (use zap_page_range).
> >
> > On page fault within the inspector process the fault callback of vm_ops will
> > get called, and from there you call hmm_mirror following its API.
> >
> > Oh also mark the vma with VM_WIPEONFORK to avoid any issue if the
> > inspector process uses fork()...
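To make the suggested scheme concrete, here is a schematic sketch (not the actual hmm_mirror API, which has since been replaced by mmu_interval_notifier): when the mirroring layer reports a change in the target mm, tear down the corresponding PTEs in the inspector's VMA with zap_page_range() (2019-era three-argument signature), and let the inspector's vm_ops->fault repopulate pages on demand. struct inspector_mapping, inspector_invalidate() and inspector_fault() are hypothetical names:

#include <linux/mm.h>

/* Hypothetical per-mapping state tying a target range to an inspector VMA. */
struct inspector_mapping {
	struct vm_area_struct *vma;	/* VMA inside the inspector process */
	unsigned long target_start;	/* mirrored start address in the target */
};

/* Called when the mirroring layer reports that [start, end) changed in the
 * target mm: drop the inspector's now-stale PTEs (zap_page_range() also
 * flushes the TLB).  Caller holds the inspector mm's mmap_sem. */
static void inspector_invalidate(struct inspector_mapping *im,
				 unsigned long start, unsigned long end)
{
	unsigned long offset = start - im->target_start;

	zap_page_range(im->vma, im->vma->vm_start + offset, end - start);
}

/* Inspector-side fault handler: re-establish the mapping lazily.  The call
 * into the mirroring API (hmm_range_fault() at the time) is elided. */
static vm_fault_t inspector_fault(struct vm_fault *vmf)
{
	/* ... look up the target page via the mirror, then insert it ... */
	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct inspector_vm_ops = {
	.fault = inspector_fault,
};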
2020 Jul 03
0
[RFC]: mm,power: introduce MADV_WIPEONSUSPEND
...count, gup_flags,
> + pages, NULL, NULL);
get_user_pages_remote() can wait for disk I/O (for swapping stuff back
in), which we'd probably like to avoid here. And I think it can also
wait for userfaultfd handling from userspace? zap_page_range() (which
is what e.g. MADV_DONTNEED uses) might be a better fit, since it can
yank entries out of the page table (forcing the next write fault to
allocate a new zeroed page) without faulting them into RAM.
> + if (count <= 0) {
> +...
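A hypothetical helper illustrating that suggestion, assuming the 2020-era APIs (mm->mmap/vm_next VMA list, three-argument zap_page_range()); vma_is_wipeonsuspend() is a made-up placeholder for however the advice would be recorded per VMA, and the caller is assumed to hold the mm's mmap lock for read:

#include <linux/mm.h>

/* Hypothetical predicate: whether this VMA was marked with the proposed
 * MADV_WIPEONSUSPEND advice.  A stub keeps the sketch self-contained. */
static bool vma_is_wipeonsuspend(struct vm_area_struct *vma)
{
	return false;	/* placeholder */
}

/* Wipe marked anonymous VMAs by zapping their PTEs, the way MADV_DONTNEED
 * does: nothing is faulted in or waited on, and the next write fault hands
 * out a fresh zeroed page. */
static void wipe_marked_vmas(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (!vma_is_wipeonsuspend(vma) || !vma_is_anonymous(vma))
			continue;
		zap_page_range(vma, vma->vm_start,
			       vma->vm_end - vma->vm_start);
	}
}

Unlike the get_user_pages_remote() approach in the quoted diff, this never sleeps on disk I/O or userfaultfd handling, which is the reviewer's point.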
2002 Dec 05
1
ext3 Problem in 2.4.20-ac1?
...000
Dec 5 15:48:36 postamt1 kernel: c0125950 c4d16180 00000004 00002000 ef44b128 00003000 2f507541 c0122cef
Dec 5 15:48:36 postamt1 kernel: c1a599b8 00000001 00000000 0804b000 d0b0e080 08048000 00000000 00003000
Dec 5 15:48:36 postamt1 kernel: Call Trace: [set_page_dirty+80/96] [zap_page_range+447/656] [fput+188/224] [exit_mmap+186/304] [do_coredump+210/222]
Dec 5 15:48:36 postamt1 kernel: [mmput+55/96] [do_exit+145/528] [collect_signal+150/224] [do_signal+495/604] [__mmdrop+47/52] [do_exit+515/528]
Dec 5 15:48:36 postamt1 kernel: [sys_munmap+52/80] [do_invalid_op+0/160] [signal_...
2019 Aug 09
6
[RFC PATCH v6 71/92] mm: add support for remote mapping
From: Mircea Cîrjaliu <mcirjaliu at bitdefender.com>
The following new mm exports are introduced:
* mm_remote_map(struct mm_struct *req_mm,
unsigned long req_hva,
unsigned long map_hva)
* mm_remote_unmap(unsigned long map_hva)
* mm_remote_reset(void)
* rmap_walk_remote(struct page *page,
struct rmap_walk_control *rwc)
This patch
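A short sketch of how these exports might be used by a caller, based only on the prototypes quoted above; the int return type, error convention, and the mirror_one_page() wrapper are assumptions:

#include <linux/mm.h>

/* Prototypes as quoted in the patch excerpt; int returns are assumed. */
extern int mm_remote_map(struct mm_struct *req_mm,
			 unsigned long req_hva, unsigned long map_hva);
extern int mm_remote_unmap(unsigned long map_hva);

/* Map one page of the request (target) mm at map_hva in the local
 * address space, inspect it, then drop the remote mapping again. */
static int mirror_one_page(struct mm_struct *req_mm,
			   unsigned long req_hva, unsigned long map_hva)
{
	int err;

	err = mm_remote_map(req_mm, req_hva, map_hva);
	if (err)
		return err;

	/* ... access the target page through map_hva ... */

	return mm_remote_unmap(map_hva);
}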
2010 Jan 28
31
[PATCH 0 of 4] aio event fd support to blktap2
Get blktap2 running on pvops.
This mainly adds eventfd support to the userland code. Based on some
prior cleanup to tapdisk-queue and the server object. We had most of
that in XenServer for a while, so I kept it stacked.
1. Clean up IPC and AIO init in tapdisk-server.
[I think tapdisk-ipc in blktap2 is basically obsolete.
Pending a later patch to remove it?]
2. Split tapdisk-queue into
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2007 Mar 28
2
[PATCH 2/3] User-space grant table device - main driver
A character device for accessing (in user-space) pages that have been
granted by other domains.
Signed-off-by: Derek Murray <Derek.Murray@cl.cam.ac.uk>
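For orientation, a minimal userspace sketch of mapping a single granted page through such a device. It is written against the gntdev ioctl ABI that later landed in mainline (include/uapi/xen/gntdev.h and /dev/xen/gntdev); the exact ioctl names and device path of this 2007 RFC may differ:

#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntdev.h>

/* Map one page granted by domain 'domid' under grant reference 'gref'.
 * Returns the mapped address, or NULL on failure.  The device fd is kept
 * open for the lifetime of the mapping (leaked in this sketch). */
static void *map_foreign_page(uint32_t domid, uint32_t gref)
{
	struct ioctl_gntdev_map_grant_ref map;
	void *addr;
	int fd;

	fd = open("/dev/xen/gntdev", O_RDWR);
	if (fd < 0)
		return NULL;

	memset(&map, 0, sizeof(map));
	map.count = 1;
	map.refs[0].domid = domid;	/* granting domain */
	map.refs[0].ref = gref;		/* its grant reference */

	if (ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &map) < 0) {
		close(fd);
		return NULL;
	}

	/* The ioctl returns an index to use as the mmap() offset. */
	addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, map.index);
	return addr == MAP_FAILED ? NULL : addr;
}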
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others